How can I easily (e.g. as easily as rsync or SCP) do a high-speed (1 Gbps) file transfer over an Internet-latency link?

A few times per year, I need to transfer massive files (20+ GB) over the Internet (20-100 ms latency), but with reasonably fast connections on the order of 1 Gbps between servers. Unfortunately, when I use rsync to do so, it has all sorts of issues and repeatedly fails to saturate the connection: it uses all of it for only a fraction of a second at a time, and often uses almost none (< 1 Mbps, possibly zero) for minutes on end. Neither the hard drive nor the CPU on either end is pinned at 100% usage, so neither can be the bottleneck. SCP behaves the same way.

Speed tests, however, manage to fully saturate my connection for the duration of the test (much longer than rsync ever does). I have had some success using HTTP through Cloudflare with a multithreaded downloader (aria2c), but it still won't saturate my connection; it merely keeps the throughput from dropping near zero. Interestingly, aria2c also fails to saturate my connection when not going through Cloudflare.
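For reference, this is roughly how I invoke aria2c (the URL and file name are placeholders for my actual server):

    # Split the download across 16 parallel connections so that no
    # single TCP stream has to carry the whole transfer.
    aria2c --max-connection-per-server=16 --split=16 \
        https://example.com/bigfile.bin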

In my research before asking this question, it seems like the TCP window may have something to do with this, but I don't have a proper understanding of what it is, nor do I know how to change it (the only information I can find on the Internet is in the context of speed tests, where the speed test has a command-line flag, but neither rsync nor SCP has any such flag documented in its man page). This isn't useful on its own, however, as I have no idea how to apply this knowledge to fix my transfer speed.
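For what it's worth, the only concrete tuning I have come across is raising the kernel's TCP buffer limits with sysctl, along these lines (the values are examples lifted from tuning guides; I don't know whether they are appropriate for my setup):

    # Allow large socket buffers so TCP can keep a big window in
    # flight; a 1 Gbps link at 100 ms latency needs roughly a 12.5 MB
    # window (the bandwidth-delay product). Run as root on both ends.
    sysctl -w net.core.rmem_max=67108864
    sysctl -w net.core.wmem_max=67108864
    # min / default / max buffer sizes in bytes
    sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"

But I have no way to tell whether this addresses my actual problem.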

As such, I ask: how can I easily (i.e. without mucking around with custom compilation, or paying for proprietary products) saturate my fast connections for bulk file transfers, the way I currently use rsync, scp, or HTTP (via nginx)? An ideal answer would give me a set of rsync flags that fully solves this problem. A more realistic answer that would still be useful is a standard Linux utility that can accomplish what I want without significant tweaking. I am sure such a thing exists, as I am surely not the only one who needs to transfer large files over the Internet.


Solution 1:

Using lftp I was able to push my connection to its full capacity (900 Mbps). You need something like this:

    lftp -e 'mirror --parallel=10 --use-pget-n=10 /var/vtemp /var/vtemp' sftp://[email protected]

The only problem I faced was that throughput degraded after running for a few hours.
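If you are transferring a single large file rather than a whole directory, pget on its own does the same thing (the user, host, and file name below are placeholders):

    # Fetch one file over 10 parallel SFTP connections.
    lftp -e 'pget -n 10 /var/vtemp/bigfile.bin; quit' sftp://user@host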