ZFS backup advice with another server

I currently have two servers, both with exactly the same hardware, disks, etc.

One server (server1) is going to be the "main" server. It's basically a raidz2 server with SMB shares that people connect to.

The other server (server2) is configured the same as server1 (raidz2) but is only for backing up server1. It's meant to be an offsite backup in case we lose server1 to disk failure, fire, water damage, etc.

I'm trying to figure out the best way to do the backups to server2.

At first, I was thinking of something like rsync. This is trivial to set up as a cron job, and I can have it run once a week.
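
Something like the following is what I had in mind (paths and hostname are just placeholders):

# run every Sunday at 03:00; -a preserves permissions/ownership, --delete mirrors removals
0 3 * * 0  rsync -a --delete /tank/shares/ backup@server2:/tank/shares/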

Alternatively, I was thinking of something with zfs send/recv. My understanding is that ZFS can do "snapshots", so I thought it would be great if I could create snapshots/incremental backups without using a lot of space. I feel like this could be more difficult to implement and more prone to errors.

As I said before, both servers are configured the same in terms of hardware and raidz2 layout. What would you all recommend for my current situation?


ZFS is very resilient. The most basic example of shipping a file system would be:

# zfs snapshot tank/test@tuesday
# zfs send tank/test@tuesday | ssh user@remotehost "zfs receive pool/test"

Note that a snapshot is taken first, and it is that snapshot (not the live file system) that gets sent.

You could wrap that up into a script that deletes the local snapshot after you've sent it to the remote, or keeps it if you've got the disk space. Keep previous snapshots on the backup server.
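
A minimal sketch of such a wrapper, assuming the same tank/test file system and the placeholder user@remotehost from above (adjust names and retention to taste):

#!/bin/sh
# take a dated snapshot, ship it, then drop the local copy once the send succeeds
FS=tank/test
SNAP=${FS}@$(date +%Y%m%d)
zfs snapshot "$SNAP"
if zfs send "$SNAP" | ssh user@remotehost "zfs receive pool/test"; then
    zfs destroy "$SNAP"    # or keep it if you have the disk space
fi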

Source and highly recommended reading: https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/


I would use incremental ZFS send/receive. It should be more efficient than rsync, as ZFS knows what has changed since the previous snapshot without needing to walk the whole file system.

Assuming you want to fully back up a file system named datapool/fs.

You first create a pool to store your backup on the destination server and a recursive snapshot on the source pool:

dest # zpool create datapool ...
source # zfs snapshot -r datapool/fs@snap1

Then you send the whole data set as an initial backup:

source # zfs send -R datapool/fs@snap1 | ssh dest zfs receive datapool/fs

Next week (or whatever period you like), you create a second snapshot on the source pool and send it incrementally to the destination one. This time, ZFS is smart enough to only send what has changed during the week (deleted, created and modified files). When a file is modified, it is not sent as a whole; only the modified blocks are transmitted and updated.

source # zfs snapshot -r datapool/fs@snap2
source # zfs send -ri snap1 datapool/fs@snap2 | 
            ssh dest zfs receive -F datapool/fs

Repeat the operation, incrementing the snapshot number each time you back up.

Remove unused old snapshots on either server when you no longer need them.
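
A rough sketch of that weekly rotation, assuming the initial full send above has already been done and using date-based snapshot names instead of snap1/snap2 (it keeps the Solaris-style send -r used above; see the note further down for other implementations):

#!/bin/sh
# create a new recursive snapshot, send the delta since the previous one,
# then retire the old snapshot once the transfer has succeeded
FS=datapool/fs
PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$FS" | tail -1 | cut -d@ -f2)
NEW=$(date +%Y%m%d)
zfs snapshot -r "${FS}@${NEW}"
if zfs send -ri "$PREV" "${FS}@${NEW}" | ssh dest "zfs receive -F $FS"; then
    zfs destroy -r "${FS}@${PREV}"    # keep more history here if you have the space
fi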

If you have bandwidth constraints, you can compress/decompress the data on the fly, for example by inserting gzip/gunzip commands in the pipeline or by enabling ssh compression.

source # zfs send -ri snap1 datapool/fs@snap2 | gzip | 
            ssh dest "gunzip | zfs receive -F datapool/fs"

You might also leverage mbuffer to get a steadier bandwidth usage, for example:

dest # mbuffer -s 128k -m 1G -I 9090 | zfs receive datapool/fs

source # zfs send -i snap2 datapool/fs@snap3 | 
            mbuffer -s 128k -m 1G -O w.x.y.z:9090

Note: The zfs send -r flag is not available in non-Solaris ZFS implementations, see http://lists.freebsd.org/pipermail/freebsd-fs/2012-September/015074.html . In that case, don't use the -F flag on the target; instead, explicitly roll back the datasets. If new datasets have been created on the source, send them independently first before doing the snapshot + incremental send/receive.
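
On those implementations, one incremental cycle might look roughly like this, with one send per dataset and an explicit rollback on the target instead of -F (datapool/fs/data is a hypothetical child dataset):

dest # zfs rollback datapool/fs@snap1
dest # zfs rollback datapool/fs/data@snap1
source # zfs send -i snap1 datapool/fs@snap2 | ssh dest zfs receive datapool/fs
source # zfs send -i snap1 datapool/fs/data@snap2 | ssh dest zfs receive datapool/fs/data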

Of course, if you have only one file system to back up with no underlying dataset hierarchy, or if you want to perform independent backups, the incremental backup is simpler to implement and should work identically regardless of the ZFS implementation:

T0:

zfs snapshot datapool/fs@snap1
zfs send datapool/fs@snap1 | ssh dest zfs receive datapool/fs

T1:

zfs snapshot datapool/fs@snap2
zfs send -i snap1 datapool/fs@snap2 | 
            ssh dest zfs receive -F datapool/fs

I had problems using zfs send/receive to send 1 TB to a remote. I decided to break the single 1 TB filesystem into several children. Now, after a network failure, at worst only a child needs to be resent. I use my script to take care of recursive sends and to keep the remote in sync: https://github.com/dareni/shellscripts/blob/master/zfsDup.sh

I hope this script can be of use to someone else.
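
For illustration, splitting one big file system into children is just a matter of creating child datasets and moving the data into them (the names below are made up):

# each child can then be snapshotted, sent and, if a transfer fails, resent on its own
zfs create datapool/fs/docs
zfs create datapool/fs/media
zfs create datapool/fs/archive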

Example output:

# zfsDup.sh shelltests
Test: array_add()
Test: array_clear()
Test: array_iterator()
Test: nameValidation()
Test: isValidSnapshot()
Test: getSnapshotFilesystems()
Test: getSnapshotData()
Test: getRemoteDestination()
Test: printElapsed()
Test: convertToBytes()
Shell tests completed, check the output for errors.

# zfsDup.sh zfstests
Start zfs tests.
Test: new parent file system.
Test: new child file system.
Test: simulate a failed send of the child filesystem.
Test: duplicate and check the child@2 snapshot is resent.
Test: snapshot existing files with updated child data.
Test: simulate a fail send os child@3
Test: snapshot test1.
Test: snapshot test2.
Test: snapshot test3.
Snapshot tests completed ok.
Test: remote host free space.
Test: new remote FS with no quota.
Test: incremental remote FS update with no quota.
Cleaning up zroot/tmp/zfsDupTest/dest zroot/tmp/zfsDupTest/source
Test execution time: 89secs
ZFS tests completed, check the output for errors.


# zfs list -t all -r ztest
NAME  USED  AVAIL  REFER  MOUNTPOINT
ztest  344K  448M  19K  /ztest
ztest@1  9K  -  19K  -
ztest@6  9K  -  19K  -
ztest/backup  112K  448M  19K  /ztest/backup
ztest/backup@1  9K  -  19K  -
ztest/backup@2  0  -  19K  -
ztest/backup@3  0  -  19K  -
ztest/backup@4  9K  -  19K  -
ztest/backup@5  0  -  19K  -
ztest/backup@6  0  -  19K  -
ztest/backup/data  57.5K  448M  20.5K  /ztest/backup/data
ztest/backup/data@1  0  -  19.5K  -
ztest/backup/data@2  0  -  19.5K  -
ztest/backup/data@3  9K  -  19.5K  -
ztest/backup/data@4  9K  -  19.5K  -
ztest/backup/data@5  0  -  20.5K  -
ztest/backup/data@6  0  -  20.5K  -

# zfs list -t all -r zroot/tmp
NAME  USED  AVAIL  REFER  MOUNTPOINT
zroot/tmp  38K  443M  19K  /tmp
zroot/tmp/zfsDupTest  19K  443M  19K  /tmp/zfsDupTest

# zfsDup.sh ztest zroot/tmp root@localhost
================================================================================
Starting duplication 20151001 16:10:56 ...
[email protected]
ztest/[email protected]
ztest/backup/[email protected]
Duplication complete 20151001 16:11:04.
================================================================================

# zfsDup.sh ztest zroot/tmp root@localhost
================================================================================
Starting duplication 20151001 16:11:25 ...
[email protected] to date
ztest/[email protected] to date
ztest/backup/[email protected] to date
Duplication complete 20151001 16:11:29.
================================================================================

# zfs snapshot -r ztest@7
# zfsDup.sh ztest zroot/tmp root@localhost
================================================================================
Starting duplication 20151001 16:12:25 ...
[email protected]
ztest/[email protected]
ztest/backup/[email protected]
Duplication complete 20151001 16:12:33.
================================================================================

# zfs list -t all -r zroot/tmp
NAME  USED  AVAIL  REFER  MOUNTPOINT
zroot/tmp  124K  442M  19K  /tmp
zroot/tmp/zfsDupTest  19K  442M  19K  /tmp/zfsDupTest
zroot/tmp/ztest  86K  442M  19K  /tmp/ztest
zroot/tmp/ztest@6  9K  -  19K  -
zroot/tmp/ztest@7  0  -  19K  -
zroot/tmp/ztest/backup  58K  442M  19K  /tmp/ztest/backup
zroot/tmp/ztest/backup@6  9K  -  19K  -
zroot/tmp/ztest/backup@7  0  -  19K  -
zroot/tmp/ztest/backup/data  30K  442M  20K  /tmp/ztest/backup/data
zroot/tmp/ztest/backup/data@6  10K  -  20K  -
zroot/tmp/ztest/backup/data@7  0  -  20K  -