I'm archiving data from one server to another. Initially I started an rsync job, but it took two weeks just to build the file list for 5 TB of data, and another week to transfer 1 TB of it.

Then I had to kill the job because we needed some downtime on the new server.

It's been agreed that we will tar it up, since we probably won't need to access it again. I was thinking of breaking it into 500 GB chunks, tarring each one, and then copying the archives across over ssh. I've been using tar and pigz, but it is still too slow.

Is there a better way to do it? I think both servers are on Red Hat. The old server uses ext4 and the new one XFS.

File sizes range from a few KB to a few MB, and there are 24 million JPEGs in 5 TB, so I'm guessing around 60-80 million for 15 TB.

Edit: After playing with rsync, nc, tar, mbuffer and pigz for a couple of days, it's clear the bottleneck is going to be disk IO, as the data is striped across 500 SAS disks and amounts to around 250 million JPEGs. On the upside, I've now learned about all these nice tools that I can use in the future.


Solution 1:

I have had very good results using tar, pigz (parallel gzip) and nc.

Source machine:

tar -cf - -C /path/of/small/files . | pigz | nc -l 9876

Destination machine:

To extract:

nc source_machine_ip 9876 | pigz -d | tar -xf - -C /put/stuff/here

To keep archive:

nc source_machine_ip 9876 > smallstuff.tar.gz

If you want to see the transfer rate, just pipe through pv after pigz -d!
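The plumbing of this pipeline can be sanity-checked locally before involving two machines. A minimal sketch, with gzip standing in for pigz, a plain pipe standing in for nc, and hypothetical /tmp paths:

```shell
# Create a tiny source tree and an empty destination (hypothetical paths).
mkdir -p /tmp/small_src /tmp/small_dst
echo "sample" > /tmp/small_src/a.txt

# Same tar flags as above: archive to stdout from the source dir,
# compress, decompress, and extract into the destination dir.
tar -cf - -C /tmp/small_src . | gzip | gzip -d | tar -xf - -C /tmp/small_dst

cat /tmp/small_dst/a.txt   # prints "sample"
```

Once this works, replacing the middle of the pipe with pigz and nc on each side gives the real transfer.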

Solution 2:

I'd stick to the rsync solution. Modern (3.0.0+) rsync uses an incremental file list, so it does not have to build the full list before the transfer starts, and restarting it after trouble won't require you to redo the whole transfer. Splitting the transfer per top-level or second-level directory will optimize this even further. (I'd use rsync -a -P and add --compress if your network is slower than your drives.)

Solution 3:

Set up a VPN (if it's over the internet), create a virtual drive of some format on the remote server (make it ext4), mount it on the remote server, then mount that on the local server (using a block-level protocol such as iSCSI), and use dd or another block-level tool to do the transfer. You can then copy the files off the virtual drive onto the real (XFS) drive at your convenience.

Two reasons:

  1. No filesystem overhead, which is the main performance culprit
  2. No seeking, you're looking at sequential read/write on both sides
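The block-level copy itself is a single sequential dd. A local sketch, with /tmp files standing in for the real block devices (the actual source and the iSCSI-backed target would be device paths like /dev/sdX, which are specific to your setup):

```shell
# Stand-in "devices": a 4 MB source image and its destination (hypothetical).
dd if=/dev/zero of=/tmp/src.img bs=1M count=4 2>/dev/null

# The actual transfer: one sequential block-level copy, no per-file overhead.
# On real devices you'd add a larger bs (e.g. bs=64M) and status=progress.
dd if=/tmp/src.img of=/tmp/dst.img bs=1M 2>/dev/null

cmp /tmp/src.img /tmp/dst.img && echo "identical"
```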