Backing up Xen domains
Solution 1:
Compression for blank space
Let's take it back to basics, starting from your snapshot. First, I'm going to ask you to look at why you're tarring up a single file. Stop and think for a bit about what tar actually does and why you're doing it here.
$ dd if=/dev/zero of=zero bs=$((1024*1024)) count=2048
2048+0 records in
2048+0 records out
2147483648 bytes transferred in 46.748718 secs (45936739 bytes/sec)
$ time gzip zero
real 1m0.333s
user 0m37.838s
sys 0m1.778s
$ ls -l zero.gz
-rw-r--r-- 1 user group 2084110 Mar 11 16:18 zero.gz
Given that, we can see that the compression gives us about a 1000:1 advantage on otherwise empty space. Compression works regardless of system support for sparse files. There are other algorithms that will tighten it up more, but for raw overall performance, gzip wins.
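If you want to compare for yourself, recreate the zero file and run the same test against other compressors (xz and zstd here are only examples; your ratios and timings will differ):
$ time xz -k zero
$ ls -l zero.xz
$ time zstd zero
$ ls -l zero.zst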
Unix utilities and sparse files
Given a system with support for sparse files, `dd` sometimes has an option to save the space. Curiously, my Mac includes a version of `dd` that has a `conv=sparse` flag, but the HFS+ filesystem doesn't support it. Conversely, a fresh Debian install I used for testing supports sparse files on ext4, but that install of `dd` doesn't have the flag. Go figure.
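Where the flag is available (GNU coreutils `dd` has it), the usage is straightforward; the filenames here are placeholders:
$ dd if=disk.img of=disk-sparse.img bs=1M conv=sparse   # seek over all-zero blocks instead of writing them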
Thus, another exercise:
I copied /dev/zero into a file the same as above. It took up 2GB of space on the filesystem, as confirmed by `du`, `df`, and `ls`. Then I ran `cp` on it and found myself with two files using 4GB of space. So, it's time to try another flag:
`cp --sparse=always sparse sparse2`
Using that forces `cp` to take a regular file and use sparse allocation whenever it sees a long string of zeroes. Now I've got two files that report as taking up 4GB according to `ls`, but only 2GB according to `du` and `df`.
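A quick way to see the two numbers side by side, assuming GNU coreutils:
$ ls -lh sparse sparse2             # apparent size: both claim 2GB
$ du -h sparse sparse2              # allocated blocks: only the non-sparse copy uses disk
$ du -h --apparent-size sparse2     # du can report the apparent size too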
Now that I've got a sparse file, will `cp` behave? Yes: `cp sparse2 sparse` results in `ls` showing me 2GB of consumed space for each file, while `du` shows them taking up zero blocks on the filesystem. Conclusion: some utilities will respect an already sparse file, but most will write the entire thing back out. Even `cp` doesn't know to turn a written file back into a sparse one unless you force its hand.
Next I created a 1MB file, made it a sparse entry, and then tried editing it in `vim`. Despite only entering a few characters, we're back to using the whole thing. A quick search turned up a similar demonstration: https://unix.stackexchange.com/questions/17572/what-is-the-interaction-of-the-rsync-size-only-and-sparse-options
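Roughly how to reproduce that test (truncate is part of GNU coreutils; the filename is arbitrary):
$ truncate -s 1M sparsefile    # 1MB apparent size, zero allocated blocks
$ du -k sparsefile             # reports 0
$ vim sparsefile               # type a few characters and save
$ du -k sparsefile             # now reports the full ~1024K; vim wrote the whole file back out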
Sparse conclusions
So my thoughts given all this:
- Snapshot with LVM
- Run `zerofree` against the snapshot
- Use `rsync -S` to copy, so the result ends up sparse
- If you can't use rsync, gzip your snapshot if you're transporting it across the network, then run `cp --sparse=always` against the decompressed image to create a sparse copy.
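Putting those pieces together, a rough end-to-end sketch might look like the following. The volume group, LV, paths, and host names are placeholders, and it assumes GNU dd (for conv=sparse) and an ext2/3/4 filesystem inside the guest (for zerofree):
$ lvcreate -s -L 2G -n domU-snap /dev/vg0/domU         # snapshot the domain's logical volume
$ zerofree /dev/vg0/domU-snap                          # zero the unused ext2/3/4 blocks
$ dd if=/dev/vg0/domU-snap of=/backup/domU.img bs=1M conv=sparse
$ rsync -S /backup/domU.img backuphost:/backups/       # -S recreates the holes on the far side
$ lvremove -f /dev/vg0/domU-snap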
Differential backups
The downside of a differential backup of a block device is that data can move around and generate large, unwieldy diffs. There is some discussion on Stack Overflow (https://stackoverflow.com/questions/4731035/binary-diff-and-patch-utility-for-a-virtual-machine-image) that concluded xdelta was the best tool for the job. If you are going to do that, again try to zero out your empty space first.
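If you go the xdelta route, the encode and decode steps with xdelta3 look roughly like this (the image names are placeholders):
$ xdelta3 -e -s sunday.img monday.img monday.vcdiff    # encode monday as a diff against sunday
$ xdelta3 -d -s sunday.img monday.vcdiff monday.img    # rebuild monday from sunday plus the diff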
Solution 2:
Your two questions...
dd just takes the sectors as an image. There is no way to tell it to skip blank spots; it will create a faithful image of the drive you're duplicating. However, if you redirect the output through a compression utility like zip or 7z, the empty space compresses away for nearly the same effect. It will still take time (dd is still reading and writing all that empty space), but the storage footprint will be greatly reduced; I have a 100+ gig disk image from VMware that compresses to around 20 gig due to the unused space.
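As a rough sketch of that pipeline (the device path is a placeholder for whatever backs your domU):
$ dd if=/dev/vg0/domU-disk bs=1M | gzip -c > domU-disk.img.gz   # image the device, compressing on the fly
$ gunzip -c domU-disk.img.gz | dd of=/dev/vg0/domU-disk bs=1M   # and the restore direction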
As for incremental saves, not to my knowledge. How would dd know what has changed and what hasn't? It wasn't really meant for that. Incremental backups would most likely have to be done with a utility like rdiff-backup or rsync plus compression, with the work happening at the file level.
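For the file-level approach, a minimal rdiff-backup sketch, assuming you mount a read-only snapshot of the guest's filesystem first (paths and host are placeholders):
$ mount -o ro /dev/vg0/domU-snap /mnt/domU-snap
$ rdiff-backup /mnt/domU-snap user@backuphost::/backups/domU    # keeps reverse increments on the target
$ umount /mnt/domU-snap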