What are possible downsides to setting a (very) large initcwnd for high-bandwidth connections?

I have been experimenting with the TCP parameters in Linux (with a 3.5 kernel). Basically concerning this connection:

Server: Gigabit uplink in a datacenter; actual bandwidth (due to shared uplinks) is around 70 MB/s when tested from another datacenter.

Client: Gigabit local LAN connected to a 200 Mbit fiber line. Fetching a test file actually achieves 20 MB/s.

Latency: about 50 ms round trip.

The remote server is used as a fileserver for files in the range of 10 to 100 MB. I noticed that with an initcwnd of 10 the transfer time for these files is heavily affected by TCP slow start: loading 10 MB takes 3.5 seconds (top speed reached: 3.3 MB/s) because the connection starts slow and ramps up, yet the transfer finishes before the maximum speed is reached. My goal is to tune for minimum load times of these files (so not highest raw throughput or lowest round-trip latency; I'm willing to sacrifice both if that decreases the actual time it takes to load a file).

So I tried a simple calculation to determine what the ideal initcwnd should be, ignoring any other connections and the possible impact on others. The bandwidth-delay product is 200 Mbit/s * 50 ms = 10 Mbit, or 1,310,720 bytes. Since initcwnd is set in units of MSS, and assuming an MSS of around 1400 bytes, this would require a setting of 1,310,720 / 1400 = 936.
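
For reference, here is the same arithmetic, followed by how I would apply such a value on Linux, where initcwnd is a per-route attribute set with iproute2 (the gateway, interface, and 1400-byte MSS below are placeholders, not my real values):

```
# bandwidth-delay product with the figures above (Mbit taken as 2^20 bits, MSS = 1400 bytes)
echo $(( 200 * 1048576 / 8 * 50 / 1000 ))         # 1310720 bytes in flight
echo $(( 200 * 1048576 / 8 * 50 / 1000 / 1400 ))  # ~936 segments

# hypothetical gateway/interface; initcwnd is set per route with iproute2
ip route change default via 192.0.2.1 dev eth0 initcwnd 936
ip route show    # the route should now list "initcwnd 936"
```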

This value is very far from the default (10*MSS in Linux, 64 KB in Windows), so it doesn't feel like a good idea to set it this high. What are the expected downsides of configuring it like this? E.g.:

  • Will it affect other users of the same network?
  • Could it create unacceptable congestion for other connections?
  • Flood router buffers somewhere on the path?
  • Increase the impact of small amounts of packet loss?

What are the expected downsides of configuring it like this? E.g.:

Will it affect other users of the same network?

Changing the initcwnd will affect:

  • users of the server where the setting is changed,
  • and only if those users' connections match the route the setting is configured on (see the sketch below).
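
A minimal sketch of what "match the route" means, with a placeholder prefix, gateway, and interface: initcwnd is an attribute of a single routing table entry, so only traffic covered by that entry gets the larger initial window.

```
# only connections routed through this specific entry get the larger initial window
ip route replace 203.0.113.0/24 via 192.0.2.1 dev eth0 initcwnd 936

# connections using other routes (e.g. the default route) keep the kernel default of 10
ip route show
```
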
Could it create unacceptable congestion for other connections?

Sure.

Flood router buffers somewhere on the path?

Not irrelevant, but unless they are your routers, I'd focus on the issues that are closer to you.

Increase the impact of small amounts of packet loss?

Sure, it can do this.

The upshot is that this will increase the cost of packet loss, both intentional and unintentional. Your server becomes easier to DoS for anyone capable of completing the 3-way handshake, because a small investment of data sent in triggers a significant amount of data sent back out.

It will also increase the chance that a large part of the initial burst has to be retransmitted because one of the first packets in that burst gets lost.
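
If you do experiment with a larger window, stock Linux tools can show whether the bigger initial burst actually turns into retransmissions; the destination filter below is a placeholder:

```
# per-connection TCP internals: cwnd, rtt, and retransmit counts
ss -ti dst 203.0.113.0/24

# kernel-wide retransmitted-segments counter; compare before and after a test transfer
nstat -az TcpRetransSegs
```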