Speed up SFTP uploads on high latency network?

I'm trying to transfer a set of large files internationally using SFTP, but I'm finding my international partner can't get upload speeds above ~50k despite very good connections on either side. We can run multiple connections each uploading at this speed (so it's not bandwidth?), but no single upload gets any faster, which is a problem as many of the files are several GB in size.

The SFTP server is the standard Apple OS X "Remote Login" service (which is OpenSSH's sshd under the hood).

Is there a way to improve upload speeds, or is there a different SFTP host that would help? It's not clear to me if this is a configuration problem or an inherent limitation of the protocol.

(For security reasons I need to be using an end-to-end encrypted peer-to-peer connection -- no cloud services).


With the OpenSSH sftp client (which you seem to be using), you can use:

  • -R switch to increase request queue length (default is 64)
  • -B switch to increase read/write request size (default is 32 KB)

For a start, try to double both:

sftp -R 128 -B 65536 user@host

It probably does not matter much which of them you increase.

Increasing either should help to saturate your high-latency connection. With the above settings, it will keep 8 MB worth of data in flight at any time (128 × 64 KB = 8 MB).

Note that this helps with big file transfers only. It won't have any effect when transferring a lot of small files.
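Rather than doubling blindly, you can size the request queue from your link's bandwidth-delay product. A rough shell sketch — the 100 Mbit/s bandwidth and 200 ms RTT figures below are assumptions, plug in your own measurements:

```shell
# Size the sftp request queue from the bandwidth-delay product (BDP).
# The link figures below are assumptions -- substitute your own measurements.
BANDWIDTH_MBIT=100   # available bandwidth, Mbit/s
RTT_MS=200           # measured round-trip time, ms
REQUEST_BYTES=65536  # -B request size (64 KB)

# BDP in bytes = bandwidth (bit/s) * RTT (s) / 8 bits per byte
BDP_BYTES=$(( BANDWIDTH_MBIT * 1000000 * RTT_MS / 1000 / 8 ))

# Outstanding requests needed to keep the pipe full (rounded up)
QUEUE=$(( (BDP_BYTES + REQUEST_BYTES - 1) / REQUEST_BYTES ))

echo "BDP: $BDP_BYTES bytes -> try: sftp -R $QUEUE -B $REQUEST_BYTES user@host"
```

For a 100 Mbit/s link at 200 ms RTT this works out to roughly 2.5 MB in flight, i.e. about 39 outstanding 64 KB requests — well within the defaults' reach once you raise -R.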


For some background and a discussion about other (GUI) SFTP clients, see the "Network delay/latency" section of my answer to Why is FileZilla SFTP file transfer max capped at 1.3MiB/sec instead of saturating available bandwidth? rsync and WinSCP are even slower.


(You mention "high latency" in the question title, but not in the body text. Have you measured the actual latency, and what are the results?)
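A quick way to get an actual number is ping's round-trip summary. The parsing sketch below assumes the BSD/macOS summary line format (Linux ping prints a similar "rtt min/avg/max/mdev" line, which the same pattern matches):

```shell
# Pull the average round-trip time (ms) out of ping's summary line.
avg_rtt_ms() {
    echo "$1" | sed -n 's|.* = [^/]*/\([^/]*\)/.*|\1|p'
}

# Real use (assumes the host answers ICMP):
#   summary=$(ping -c 10 example.net | tail -1)
summary="round-trip min/avg/max/stddev = 180.1/201.5/230.9/12.3 ms"
avg_rtt_ms "$summary"   # prints the average RTT: 201.5
```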

There's a patch set for OpenSSH that explicitly improves throughput on high-latency network links: HPN-SSH (emphasis mine):

SCP and the underlying SSH2 protocol implementation in OpenSSH is network performance limited by statically defined internal flow control buffers. These buffers often end up acting as a bottleneck for network throughput of SCP, especially on long and high bandwidth network links. Modifying the ssh code to allow the buffers to be defined at run time eliminates this bottleneck. We have created a patch that will remove the bottlenecks in OpenSSH and is fully interoperable with other servers and clients. In addition HPN clients will be able to download faster from non HPN servers, and HPN servers will be able to receive uploads faster from non HPN clients.

So, try to compile and use HPN-SSH on the receiving side, and see whether it improves your transfer speed.
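One way to verify you're actually running an HPN build on each end is to look at the version string — HPN releases typically append an "hpn" tag (e.g. `OpenSSH_7.8-hpn14v15`; the exact tag format is an assumption based on common HPN releases):

```shell
# Return success if an ssh version string looks like an HPN-patched build.
# HPN releases typically tag the version, e.g. "OpenSSH_7.8-hpn14v15".
is_hpn_ssh() {
    case "$1" in
        *hpn*|*HPN*) return 0 ;;
        *)           return 1 ;;
    esac
}

# Real use: version=$(ssh -V 2>&1)                    # local client
#           version=$(ssh user@host 'ssh -V' 2>&1)    # remote side
version="OpenSSH_9.0p1, LibreSSL 3.3.6"   # sample output for illustration
if is_hpn_ssh "$version"; then
    echo "HPN-patched build"
else
    echo "stock OpenSSH"
fi
```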


I'm trying to transfer a set of large files internationally using SFTP

It hasn't been mentioned as an answer yet, but when transferring multiple files over a high-latency link, there's one really simple solution to get better performance:

Transfer multiple files in parallel.

And it is a solution that you even mentioned in your question. Use it.

Basically, TCP doesn't handle connections with a large bandwidth-delay product very well - a single connection often can't keep enough data in flight to fill the pipe. See https://en.wikipedia.org/wiki/TCP_tuning

Since each connection is limited by the TCP protocol, just use more connections.
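The idea can be sketched with plain shell job control. `transfer_one` below is a placeholder (swap in your real scp or sftp invocation), and the 4-way concurrency cap is an arbitrary assumption:

```shell
# Upload many files in parallel, at most MAX_JOBS at a time.
MAX_JOBS=4

# Placeholder for a real transfer, e.g.:
#   scp "$1" user@host:/incoming/
transfer_one() { cp "$1" "$DEST_DIR/"; }

parallel_transfer() {
    running=0
    for f in "$@"; do
        transfer_one "$f" &                 # one connection per file
        running=$((running + 1))
        if [ "$running" -ge "$MAX_JOBS" ]; then
            wait                            # crude throttle: drain the batch
            running=0
        fi
    done
    wait                                    # wait for the final partial batch
}
```

Each background job gets its own TCP connection (and its own congestion window), so the aggregate can fill the pipe even when a single connection can't. Tools like lftp or GNU parallel can do the same with less scripting.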