What is the fastest and most reliable way of transferring a lot of files?

Have you considered Sneakernet? With large data sets, overnight shipping is often going to be faster and cheaper than transferring via the Internet.


How? Or TL;DR

The fastest method I've found is a combination of tar, mbuffer and ssh.

E.g.:

tar zcf - bigfile.m4p | mbuffer -s 1K -m 512M | ssh otherhost "tar zxf -"

Using this I've achieved sustained local network transfers of over 950 Mb/s on 1 Gb links. Replace the paths in each tar command with ones appropriate for what you're transferring.

Why? mbuffer!

The biggest bottleneck in transferring large files over a network is, by far, disk I/O. The answer to that is mbuffer or buffer. They are largely similar, but mbuffer has some advantages. The default buffer size is 2MB for mbuffer and 1MB for buffer. Larger buffers are more likely to never be empty. Choosing a block size which is the lowest common multiple of the native block sizes on both the source and destination filesystems will give the best performance.
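
For example, if both filesystems use 4 KiB native blocks (an assumption; check yours), the lowest common multiple is also 4 KiB, so a tuned version of the command above might look like this sketch, keeping the 512 MiB buffer:

tar zcf - bigfile.m4p | mbuffer -s 4k -m 512M | ssh otherhost "tar zxf -"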

Buffering is the thing that makes all the difference! Use it if you have it! If you don't have it, get it! Using (m)?buffer plus anything is better than anything by itself. It is almost literally a panacea for slow network file transfers.

If you're transferring multiple files, use tar to "lump" them together into a single data stream. If it's a single file, you can use cat or I/O redirection. The overhead of tar vs. cat is statistically insignificant, so I always use tar (or zfs send where I can) unless it's already a tarball. Neither of these is guaranteed to give you metadata (and in particular cat will not). If you want metadata, I'll leave that as an exercise for you.
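
For a single file, either of these sketches works in place of the tar pair (the file name reuses bigfile.m4p from above; the destination path, and the pool/dataset name in the zfs example, are placeholders you'd substitute):

cat bigfile.m4p | mbuffer -m 512M | ssh otherhost "cat > /destination/bigfile.m4p"

mbuffer -m 512M < bigfile.m4p | ssh otherhost "cat > /destination/bigfile.m4p"

zfs send pool/dataset@snapshot | mbuffer -m 512M | ssh otherhost "zfs receive pool/dataset"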

Finally, using ssh as the transport mechanism is secure and carries very little overhead. Again, the overhead of ssh vs. nc is statistically insignificant.
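
On a trusted network you can swap ssh for nc in the same pipeline. This is only a sketch: the port number (2222) is arbitrary, the paths are placeholders, some netcat variants want "nc -l -p 2222" for the listener, and on the sender you may need nc's -N option to close the connection when the stream ends.

On the receiving end (start the listener first):

nc -l 2222 | mbuffer -m 512M | tar xf - -C /destination

On the sending end:

tar cf - /path/to/files | mbuffer -m 512M | nc otherhost 2222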


You mention "rsync," so I assume you are using Linux:

Why don't you create a tar or tar.gz file? Transferring one big file over the network is faster than transferring many small ones. You could even compress it if you wish...

Tar with no compression:

On the source server:

tar -cf file.tar /path/to/files/

Then on the receiving end:

cd /path/to/files/
tar -xf /path/to/file.tar

Tar with compression:

On the source server:

tar -czf file.tar.gz /path/to/files/

Then on the receiving end:

cd /path/to/files/
tar -xzf /path/to/file.tar.gz

You would then simply use rsync to do the actual transfer of the tar (or tar.gz) file.
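
For example (a sketch; the destination path is a placeholder, and -P just adds progress output and keeps partial transfers so an interrupted copy can resume):

rsync -avP file.tar.gz otherhost:/path/to/destination/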