Notes about open source software, computers, other stuff.

Tag: ZFS

Expanding a partition-backed ZFS special device

The spinning disk pool on my home server uses a mirrored special device (for storing metadata and small blocks, see also this blog post at Klara Systems) based on two NVMe SSDs. Because my home server only has two M.2 slots and I wanted to have a pure SSD ZFS pool as well, I partitioned the SSDs. Each SSD has a partition for the SSD pool and one for the special device of the storage pool (which uses a mirror of spinning disks).

Note: This isn’t really a recommended production setup, as you are basically hurting the performance of both the special device and the SSD pool. But for my home server this works fine. For example, I use the special device’s small-blocks functionality to store previews of the photos I keep on my Nextcloud server. This makes scrolling through the Memories app’s timeline a breeze, even though the full-size photos are stored on the spinning disks.
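As an aside, which blocks end up on the special device is controlled per dataset via the special_small_blocks property; a minimal sketch (the dataset name is just an example, not my actual layout):

```shell
# Store all blocks of 64K or smaller on the special vdev
# (dataset name is hypothetical)
zfs set special_small_blocks=64K storage/nextcloud

# Verify the setting
zfs get special_small_blocks storage/nextcloud
```

Note that the value must be smaller than or equal to the dataset’s recordsize, otherwise all data would land on the special device.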

Today, I noticed that the special device had filled up and, given that there was still some unpartitioned space on the SSDs, I wondered whether I could just expand the partition used by the special device (using parted) and then have the ZFS pool recognise the extra space. I have expanded partition-based ZFS pools before, e.g. after upgrading the SSD in my laptop, but I hadn’t tried this with a special device.

After some experimentation, I can tell you: this works.

Here is how I tested this on a throw-away file-backed zpool. First create four test files: two for the actual mirror pool and two that I’ll add as a special device.

for i in {0..3} ; do truncate -s 1G file$i.raw ; done
ls -lh
total 4,0K
-rw-rw-r-- 1 lennart lennart 1,0G mrt 11 12:46 file0.raw
-rw-rw-r-- 1 lennart lennart 1,0G mrt 11 12:46 file1.raw
-rw-rw-r-- 1 lennart lennart 1,0G mrt 11 12:46 file2.raw
-rw-rw-r-- 1 lennart lennart 1,0G mrt 11 12:46 file3.raw

Create a regular mirror pool:

zpool create testpool mirror $(pwd)/file0.raw $(pwd)/file1.raw
zpool list -v testpool
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                   960M   146K   960M        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M   104K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE

Add the special device to the zpool:

zpool add testpool special mirror $(pwd)/file2.raw $(pwd)/file3.raw
zpool list -v testpool
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   190K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M   190K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  mirror-1                 960M      0   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file2.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file3.raw     1G      -      -        -         -      -      -      -    ONLINE

I wasn’t sure whether I could simply truncate the backing files for the special device to a larger size while they were part of the pool, so I detached them one by one, grew them to 2GB with truncate, and then reattached them:

zpool detach testpool /tmp/tests/file2.raw
zpool list -v testpool
truncate -s 2G file2.raw
zpool attach testpool $(pwd)/file3.raw /tmp/tests/file2.raw
zpool list -v testpool

NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   194K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M   104K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  /tmp/tests/file3.raw       1G    90K   960M        -         -     0%  0.00%      -    ONLINE
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   278K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M  97.5K   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  mirror-1                 960M   180K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/tests/file3.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file2.raw     2G      -      -        -         -      -      -      -    ONLINE

And for the second “disk”:

zpool detach testpool /tmp/tests/file3.raw
zpool list -v testpool
truncate -s 2G file3.raw
zpool attach testpool $(pwd)/file2.raw /tmp/tests/file3.raw
zpool list -v testpool

NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   218K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M    66K   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  /tmp/tests/file2.raw       2G   152K   960M        -        1G     0%  0.01%      -    ONLINE
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   296K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M    54K   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  mirror-1                 960M   242K   960M        -        1G     0%  0.02%      -    ONLINE
    /tmp/tests/file2.raw     2G      -      -        -        1G      -      -      -    ONLINE
    /tmp/tests/file3.raw     2G      -      -        -         -      -      -      -    ONLINE

And now, here comes the magic. Time to expand the pool and see if the special device will grow to 2GB:

zpool online -e testpool $(pwd)/file2.raw $(pwd)/file3.raw
zpool list -v testpool
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  2.88G   226K  2.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M    48K   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  mirror-1                1.94G   178K  1.94G        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file2.raw     2G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file3.raw     2G      -      -        -         -      -      -      -    ONLINE

Yay! It worked! So, for my actual storage pool, I ended up doing the following:

  • Given that the partitions used by the special device are located at the end of the SSD, it was easy to expand them using parted resizepart.
  • Run zpool online -e storage partname-1 partname-2 (so no detach/attach was needed here).
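Putting those two steps together, the procedure on the real pool looked roughly like this (the device and partition names below are hypothetical; adjust them to your own setup):

```shell
# Grow partition 2 on each SSD to the end of the disk
# (device names are examples)
parted /dev/nvme0n1 resizepart 2 100%
parted /dev/nvme1n1 resizepart 2 100%

# Tell ZFS to expand onto the newly available space
zpool online -e storage nvme0n1p2 nvme1n1p2

# Check the new size of the special mirror
zpool list -v storage
```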

Don’t forget to clean up the testpool:

zpool destroy testpool
rm -r /tmp/tests


LXD container snapshots, ZFS snapshots and moving containers

Here, we investigate the behaviour of LXD when moving containers between LXD cluster nodes, with a focus on various types of (filesystem) snapshots.

LXD containers can be snapshotted by LXD itself, but if one uses a ZFS storage backend, one can also use a tool like Sanoid to snapshot a container’s filesystem. When moving an LXD container from one LXD cluster node to another, one of course wants those filesystem snapshots to move along as well. Spoiler: this isn’t always the case.

Let’s create a test container on my home LXD cluster (which uses ZFS as default storage backend), starting on node wiske2:

lxc launch ubuntu:22.04 snapmovetest --target=wiske2

Check the container is running:

lxc list snapmovetest

+--------------+---------+-----------------------+-------------------------------------------+-----------+-----------+----------+
|     NAME     |  STATE  |         IPV4          |                   IPV6                    |   TYPE    | SNAPSHOTS | LOCATION |
+--------------+---------+-----------------------+-------------------------------------------+-----------+-----------+----------+
| snapmovetest | RUNNING | 192.168.10.158 (eth0) | 2a10:3781:782:1:216:3eff:fed5:ef48 (eth0) | CONTAINER | 0         | wiske2   |
+--------------+---------+-----------------------+-------------------------------------------+-----------+-----------+----------+

Now, let’s use LXD to create two snapshots:

lxc snapshot snapmovetest "Test1"
sleep 10
lxc snapshot snapmovetest "Test2"

Check the snapshots have been made:

lxc info snapmovetest | awk '$1=="Snapshots:" {toprint=1}; {if(toprint==1) {print $0}}'

Snapshots:
+-------+----------------------+------------+----------+
| NAME  |       TAKEN AT       | EXPIRES AT | STATEFUL |
+-------+----------------------+------------+----------+
| Test1 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+
| Test2 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+

At the ZFS level:

zfs list -rtall rpool/lxd/containers/snapmovetest

NAME                                               USED  AVAIL     REFER  MOUNTPOINT
rpool/lxd/containers/snapmovetest                 24.7M   192G      748M  legacy
rpool/lxd/containers/snapmovetest@snapshot-Test1    60K      -      748M  -
rpool/lxd/containers/snapmovetest@snapshot-Test2    60K      -      748M  -

All is fine! Now, let’s move the container to node wiske3:

lxc stop snapmovetest
lxc move snapmovetest snapmovetest --target=wiske3
lxc list snapmovetest

+--------------+---------+------+------+-----------+-----------+----------+
|     NAME     |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+--------------+---------+------+------+-----------+-----------+----------+
| snapmovetest | STOPPED |      |      | CONTAINER | 2         | wiske3   |
+--------------+---------+------+------+-----------+-----------+----------+

Check the snapshots:

lxc info snapmovetest | awk '$1=="Snapshots:" {toprint=1}; {if(toprint==1) {print $0}}'

Snapshots:
+-------+----------------------+------------+----------+
| NAME  |       TAKEN AT       | EXPIRES AT | STATEFUL |
+-------+----------------------+------------+----------+
| Test1 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+
| Test2 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+

At the ZFS level:

zfs list -rtall rpool/lxd/containers/snapmovetest

NAME                                               USED  AVAIL     REFER  MOUNTPOINT
rpool/lxd/containers/snapmovetest                  749M   202G      748M  legacy
rpool/lxd/containers/snapmovetest@snapshot-Test1    60K      -      748M  -
rpool/lxd/containers/snapmovetest@snapshot-Test2    60K      -      748M  -

So far so good: snapshots taken with the native LXD toolchain get moved. Now let’s manually create a ZFS snapshot:

zfs snapshot rpool/lxd/containers/snapmovetest@manual_zfs_snap
zfs list -rtall rpool/lxd/containers/snapmovetest

NAME                                                USED  AVAIL     REFER  MOUNTPOINT
rpool/lxd/containers/snapmovetest                   749M   202G      748M  legacy
rpool/lxd/containers/snapmovetest@snapshot-Test1     60K      -      748M  -
rpool/lxd/containers/snapmovetest@snapshot-Test2     60K      -      748M  -
rpool/lxd/containers/snapmovetest@manual_zfs_snap     0B      -      748M  -

Now move the container back to node wiske2:

lxc move snapmovetest snapmovetest --target=wiske2
lxc list snapmovetest

+--------------+---------+------+------+-----------+-----------+----------+
|     NAME     |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+--------------+---------+------+------+-----------+-----------+----------+
| snapmovetest | STOPPED |      |      | CONTAINER | 2         | wiske2   |
+--------------+---------+------+------+-----------+-----------+----------+

What happened to the snapshots?

lxc info snapmovetest | awk '$1=="Snapshots:" {toprint=1}; {if(toprint==1) {print $0}}'

Snapshots:
+-------+----------------------+------------+----------+
| NAME  |       TAKEN AT       | EXPIRES AT | STATEFUL |
+-------+----------------------+------------+----------+
| Test1 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+
| Test2 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+

zfs list -rtall rpool/lxd/containers/snapmovetest

NAME                                               USED  AVAIL     REFER  MOUNTPOINT
rpool/lxd/containers/snapmovetest                  749M   191G      748M  legacy
rpool/lxd/containers/snapmovetest@snapshot-Test1    60K      -      748M  -
rpool/lxd/containers/snapmovetest@snapshot-Test2    60K      -      748M  -

Somehow, the ZFS-level snapshot has been removed… I guess this part of the LXD manual should be written in bold (emphasis mine):

LXD assumes that it has full control over the ZFS pool and dataset. Therefore, you should never maintain any datasets or file system entities that are not owned by LXD in a ZFS pool or dataset, because LXD might delete them.

Consequently, in an LXD cluster one shouldn’t use Sanoid to snapshot ZFS-backed LXD container filesystems. Instead, use LXD’s built-in automatic snapshot capabilities (see the snapshots.expiry and snapshots.schedule options).
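For example, to have LXD itself snapshot a container daily and clean up old snapshots automatically, something like the following should work (the schedule and expiry values here are illustrative):

```shell
# Snapshot the container every night at 03:00 (cron syntax)
lxc config set snapmovetest snapshots.schedule "0 3 * * *"

# Let automatic snapshots expire after two weeks
lxc config set snapmovetest snapshots.expiry "2w"
```

Because these snapshots are created through LXD, they are part of the container’s own metadata and move along when the container is moved to another cluster node.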

Clean up:

lxc delete snapmovetest


Moving annual backups from an external disk with Ext4 to an external disk with ZFS

For a few years I have used the Christmas holidays to create a full backup of my /home on an external hard disk. For that I used a Bash script around rsync that uses hard links to keep the used disk space under control. Each backup was saved in a directory named with the date of the backup. POSIX ACLs were also backed up.

Since last year’s backup I have moved to ZFS (using ZFS on Linux with Ubuntu 14.04) as the filesystem for /home (and others). Since ZFS checksums both data and metadata, it can detect corrupted files (and if the data is redundant it can also fix them). This is a feature I’d like to have for my backups as well: I’d rather know when corruption occurs than live in ignorance.

So the plan is to move the old backups from the external disk to the ZFS pool in my server, and instead of using hard links I’ll transfer the backups in order from old to new to the ZFS pool, making a snapshot for each. Additionally, I will turn on compression (using the lz4 algorithm). Once that is done, I will reformat the external drive and create a ZFS pool called “JaarlijkseBackupPool” on it (jaarlijks means annual in Dutch).

The old situation

In the current/old situation, this is how much disk space is used on the external disk (with and without taking the hard links into account):

$ sudo du -csh /mnt/JaarlijkseBackups/*
102G    /mnt/JaarlijkseBackups/2010-11-28
121G    /mnt/JaarlijkseBackups/2013-02-04
101G    /mnt/JaarlijkseBackups/2013-12-23
324G    total
$ sudo du -clsh /mnt/JaarlijkseBackups/*
102G    /mnt/JaarlijkseBackups/2010-11-28
193G    /mnt/JaarlijkseBackups/2013-02-04
255G    /mnt/JaarlijkseBackups/2013-12-23
549G    total

Copying the data from the Ext4 disk to a temporary ZFS filesystem on my server

The ZFS pool in my server is called storage. In order to preserve the POSIX ACLs of the Ext4 system, they need to be enabled when creating the ZFS filesystem as well. Setting xattr=sa means the ACLs are stored more efficiently (although this option is not compatible with other ZFS implementations at this time, so if I were to import the ZFS pool in FreeBSD, for example, that information would be inaccessible).

$ zfs create storage/JaarlijkseBackupsOrganized \
      -o compression=lz4 \
      -o acltype=posixacl \
      -o xattr=sa
$ sudo rsync -ahPAXHS --numeric-ids \
     /storage/JaarlijkseBackups/2010-11-28/ \
     /storage/JaarlijkseBackupsOrganized
$ zfs snapshot storage/JaarlijkseBackupsOrganized@2010-11-28

This is followed by the same rsync and zfs snapshot commands for the other two dates. Once that is finished, this is the status of that ZFS FS:

$ zfs list -r -t all storage/JaarlijkseBackupsOrganized
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
storage/JaarlijkseBackupsOrganized              275G   438G   272G  /storage/JaarlijkseBackupsOrganized
storage/JaarlijkseBackupsOrganized@2010-11-28  1,03G      -  88,9G  -
storage/JaarlijkseBackupsOrganized@2013-02-04  2,33G      -   196G  -
storage/JaarlijkseBackupsOrganized@2013-12-23      0      -   272G  -
$ zfs get -r -t all compressratio storage/JaarlijkseBackupsOrganized
NAME                                           PROPERTY       VALUE  SOURCE
storage/JaarlijkseBackupsOrganized             compressratio  1.13x  -
storage/JaarlijkseBackupsOrganized@2010-11-28  compressratio  1.19x  -
storage/JaarlijkseBackupsOrganized@2013-02-04  compressratio  1.14x  -
storage/JaarlijkseBackupsOrganized@2013-12-23  compressratio  1.12x  -

Partitioning the external disk

The external disk is a 1TB Samsung SATA 3Gbps SpinPoint F2 EcoGreen disk (type HD103SI, serial number: S1VSJD6ZB02657). The disk uses 512B sectors:

sudo hdparm -I /dev/sdf |grep Sector
     Logical/Physical Sector size:           512 bytes

Before using it with ZFS, it needs to be partitioned. I used parted:

$ parted /dev/sdf
GNU Parted 2.3
Using /dev/sdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA SAMSUNG HD103SI (scsi)
Disk /dev/sdf: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1000GB  1000GB  primary  ext4

(parted) mklabel
New disk label type? gpt
(parted) u
Unit?  [compact]? MB
(parted) p
Model: ATA SAMSUNG HD103SI (scsi)
Disk /dev/sdf: 1000205MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

(parted) mkpart
Partition name?  []? JaarlijkseBackups-HD103SI-S1VSJD6ZB02657
File system type?  [ext2]? zfs
Start? 1M
End? 1000204M
(parted) p
Model: ATA SAMSUNG HD103SI (scsi)
Disk /dev/sdf: 1000205MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End        Size       File system  Name                                  Flags
 1      1,05MB  1000204MB  1000203MB  ext4         JaarlijkseBackups-HD103SI-S1VSJD6ZB0

(parted) q

This removes the old partition table and creates a new GPT partition table (which allows naming partitions). Next I set the units to MB so I can leave 1MB free at the beginning and end of the partition (which can be helpful when importing this pool in e.g. FreeBSD). The disk also shows up in /dev/disk/by-partlabel now.

Creating the new ZFS pool

$ zpool create -o ashift=9 JaarlijkseBackupPool \
    /dev/disk/by-partlabel/JaarlijkseBackups-HD103SI-S1VSJD6ZB0
$ zpool status JaarlijkseBackupPool
  pool: JaarlijkseBackupPool
 state: ONLINE
  scan: none requested
config:

        NAME                                    STATE     READ WRITE CKSUM
        JaarlijkseBackupPool                    ONLINE       0     0     0
          JaarlijkseBackups-HD103SI-S1VSJD6ZB0  ONLINE       0     0     0

errors: No known data errors

Migrating the data

Now that the new ZFS pool and filesystem are in place, it is time to move the backups over, starting with the oldest one. The -R option makes sure that attributes like compression and xattr are transferred to the new FS. The following commands send each snapshot to the new pool (the -n option of zfs receive performs a dry run, just to show how it works). After the first snapshot is sent, the other two are sent with the -i option of zfs send, so that only the incremental differences between the snapshots are transferred.

$ zfs send -vR storage/JaarlijkseBackupsOrganized@2010-11-28 | \
      zfs receive -Fvu JaarlijkseBackupPool/oldRsyncBackups
$ zfs send -vR -i storage/JaarlijkseBackupsOrganized@2010-11-28 \
    storage/JaarlijkseBackupsOrganized@2013-02-04 | \
    zfs receive -Fvu JaarlijkseBackupPool/oldRsyncBackups
$ zfs send -vR -i storage/JaarlijkseBackupsOrganized@2013-02-04 \
      storage/JaarlijkseBackupsOrganized@2013-12-23 | \
      zfs receive -Fvu JaarlijkseBackupPool/oldRsyncBackups -n
send from @2013-02-04 to storage/JaarlijkseBackupsOrganized@2013-12-23 estimated size is 84,3G
total estimated size is 84,3G
TIME        SENT   SNAPSHOT
would receive incremental stream of storage/JaarlijkseBackupsOrganized@2013-12-23 into JaarlijkseBackupPool@2013-12-23
14:09:16   4,22M   storage/JaarlijkseBackupsOrganized@2013-12-23
14:09:17   8,46M   storage/JaarlijkseBackupsOrganized@2013-12-23
14:09:18   18,4M   storage/JaarlijkseBackupsOrganized@2013-12-23
14:09:19   24,8M   storage/JaarlijkseBackupsOrganized@2013-12-23
^C
$ zfs send -vR -i  storage/JaarlijkseBackupsOrganized@2013-02-04 \
      storage/JaarlijkseBackupsOrganized@2013-12-23 | \
      zfs receive -Fvu JaarlijkseBackupPool/oldRsyncBackups

Add this year’s backup

At first I tried to add the new backups to the oldRsyncBackups FS as well, but that didn’t work (at least not as an incremental backup), so I ended up making a new backup. The extra cost in disk space is not a real problem: disk space is rather cheap, and the current configuration will last me at least one more year. So after creating a snapshot called 2014-12-26 of my /home I ran:

$ zfs send -v storage/home@2014-12-26 | \
      zfs receive -Fu JaarlijkseBackupPool/home
$ zfs list -r -t all JaarlijkseBackupPool
NAME                                              USED  AVAIL  REFER  MOUNTPOINT
JaarlijkseBackupPool                              581G   332G    30K  /JaarlijkseBackupPool
JaarlijkseBackupPool/home                         311G   332G   311G  /JaarlijkseBackupPool/home
JaarlijkseBackupPool/home@2014-12-26             51,2M      -   311G  -
JaarlijkseBackupPool/oldRsyncBackups              271G   332G   267G  /JaarlijkseBackupPool/oldRsyncBackups
JaarlijkseBackupPool/oldRsyncBackups@2010-11-28   974M      -  87,1G  -
JaarlijkseBackupPool/oldRsyncBackups@2013-02-04  2,23G      -   193G  -
JaarlijkseBackupPool/oldRsyncBackups@2013-12-23      0      -   267G  -
$ zfs get -r compressratio JaarlijkseBackupPool
NAME                                             PROPERTY       VALUE  SOURCE
JaarlijkseBackupPool                             compressratio  1.15x  -
JaarlijkseBackupPool/home                        compressratio  1.17x  -
JaarlijkseBackupPool/home@2014-12-26             compressratio  1.17x  -
JaarlijkseBackupPool/oldRsyncBackups             compressratio  1.13x  -
JaarlijkseBackupPool/oldRsyncBackups@2010-11-28  compressratio  1.19x  -
JaarlijkseBackupPool/oldRsyncBackups@2013-02-04  compressratio  1.14x  -
JaarlijkseBackupPool/oldRsyncBackups@2013-12-23  compressratio  1.12x  -

Finishing up

In order to be able to disconnect the external drive without damaging the filesystems, use

zpool export JaarlijkseBackupPool

Later, the drive/pool can be imported using the zpool import command.
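A minimal sketch of re-attaching the disk at a later date (assuming the pool’s disk is connected and visible to the system):

```shell
# Without arguments, list pools that are available for import
zpool import

# Import the backup pool by name
zpool import JaarlijkseBackupPool
```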

Now that the migration is done, the intermediate filesystem (including the snapshots) can also be removed:

zfs destroy -r storage/JaarlijkseBackupsOrganized

For reference: the old rsync script

#!/bin/sh
#
# Time-stamp: <2013-02-04 16:48:31 (root)>
# This script helps me create my annual backups to an external hard
# disk. The script uses rsync's hard link option to make hard links to
# the previous backups for files that haven't changed. It makes the
# backup based on an LVM snapshot it creates of the LV that contains
# the /home partition.
# This script needs to be run as root.
 
today=`date +%F`
olddate="2013-02-04"
 
srcdir="/mnt/backupsrc/"
destdir="/mnt/backupdest/JaarlijkseBackups/$today"
prevdir="/mnt/backupdest/JaarlijkseBackups/$olddate"
 
# LVM options
VG=raid5vg
LV=home
 
# rsync options
options="-ahPAXHS --numeric-ids"
exclusions="--exclude 'lost+found/'"
#  --exclude '*/.thumbnails'"
# exclusions="$exclusions --exclude '*/.gvfs/'"
# exclusions="$exclusions --exclude '*/.cache/' --exclude '**/Cache'"
# exclusions="$exclusions --exclude '*/.recycle/'"
 
# Check to see if the previous backup directory exists
if [ ! -d $prevdir ]; then
    echo "Error: The directory with the previous back up ($prevdir) doesn't exist" 1>&2
    exit 1
fi
 
# Make a snapshot of the home LV that we can backup
lvcreate -L15G -s -n snap$LV /dev/$VG/$LV
mount /dev/$VG/snap$LV $srcdir
 
 
# Start the backup, first a dry-run, then the full one
rsynccommand="rsync $options $exclusions --link-dest=$prevdir $srcdir $destdir"
 
$rsynccommand -n
 
# Wait for user input
echo "This was a dry run. Press a key to continue with the real stuff or"
echo "hit Ctrl-c to abort."
read dummy
 
$rsynccommand


Using rsync to backup a ZFS file system to a remote Synology Diskstation

Some time ago I moved from using LVM to using ZFS on my home server. This meant I also had to change the backup script I used to make backups on a remote Synology Diskstation. Below is the updated script. I also updated it such that it now needs a single command line argument: the hostname of the Diskstation to backup to (because I now have two Diskstations at different locations). If you want to run this script from cron you should set up key-based SSH login (see also here and here).
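Setting up key-based SSH login essentially amounts to generating a key pair and copying the public key to the NAS; a rough sketch (the key filename and hostname below are examples):

```shell
# Generate a key pair without a passphrase (needed for unattended cron runs)
ssh-keygen -t ed25519 -f ~/.ssh/id_backup -N ""

# Copy the public key to the Diskstation's root account
ssh-copy-id -i ~/.ssh/id_backup.pub root@diskstation.example.org
```

With a key stored in a non-default location like this, the rsync invocation in the script would need -e 'ssh -i ~/.ssh/id_backup' instead of plain -e 'ssh'.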

#!/bin/bash
#
# This script makes a backup of my home dirs to a Synology DiskStation at
# another location. I use ZFS for my /home, so I make a snapshot first and
# backup from there.
#
# This script requires that the first command line argument is the
# host name of the remote backup server (the Synology NAS). It also
# assumes that the location of the backups is the same on each
# remote backup server.
#
# Time-stamp: <2014-10-27 11:35:39 (L.C. Karssen)>
# This script it licensed under the GNU GPLv3.
 
set -u
 
if [ ${#} -lt 1 ]; then
    echo -n "ERROR: Please specify a host name as first command" 1>&2
    echo " line option" 1>&2
    exit -1
fi
 
###############################
# Some settings
###############################
# Options for the remote (Synology) backup destination
DESTHOST=$1
DESTUSER=root
DESTPATH=/volume1/Backups/
DEST=${DESTUSER}@${DESTHOST}:${DESTPATH}
 
# Options for the client (the data to be backed up)
# ZFS options
ZFS_POOL=storage
ZFS_DATASET=home
ZFS_SNAPSHOT=rsync_snapshot
SNAPDIR="/home/.zfs/snapshot/$ZFS_SNAPSHOT"
 
# Backup source path. Don't forget to have trailing / otherwise
# rsync's --delete option won't work
SRC=${SNAPDIR}/
 
# rsync options
OPTIONS="--delete -azvhHSP --numeric-ids --stats"
OPTIONS="$OPTIONS --timeout=60 --delete-excluded"
OPTIONS="$OPTIONS --skip-compress=gz/jpg/mp[34]/7z/bz2/ace/avi/deb/gpg/iso/jpeg/lz/lzma/lzo/mov/ogg/png/rar/CR2/JPG/MOV"
EXCLUSIONS="--exclude lost+found --exclude .thumbnails --exclude .gvfs"
EXCLUSIONS="$EXCLUSIONS --exclude .cache --exclude Cache"
EXCLUSIONS="$EXCLUSIONS --exclude .local/share/Trash"
EXCLUSIONS="$EXCLUSIONS --exclude home/lennart/tmp/Downloads/*.iso"
EXCLUSIONS="$EXCLUSIONS --exclude home/lennart/.recycle"
EXCLUSIONS="$EXCLUSIONS --exclude _dev_dvb_adapter0_Philips_TDA10023_DVB*"
 
 
 
###############################
# The real work
###############################
 
# Create the ZFS snapshot
if [ -d $SNAPDIR ]; then
    # If the directory exists, another backup process may be running
    echo "Directory $SNAPDIR already exists! Is another backup still running?"
    exit -1
else
    # Let's make snapshots
    zfs snapshot $ZFS_POOL/$ZFS_DATASET@$ZFS_SNAPSHOT
fi
 
 
# Do the actual backup
rsync -e 'ssh' $OPTIONS $EXCLUSIONS $SRC $DEST
 
# Remove the ZFS snapshot
if [ -d $SNAPDIR ]; then
    zfs destroy $ZFS_POOL/$ZFS_DATASET@$ZFS_SNAPSHOT
else
    echo "$SNAPDIR does not exist!" 1>&2
    exit 2
fi
 
exit 0


© 2024 Lennart's weblog
