Why are NIC ring parameters not pre-set to their hardware maximum capabilities?

Solution 1:

First, the numbers you set aren't in bytes, as many people think; they are in descriptors (and the descriptor size is hardware-dependent). So when you increase the ring length, you ask the kernel to allocate more memory for these descriptors. In general, you want that kernel memory to sit in the L1 cache so that interrupt processing is as fast as possible; increasing the ring size makes this less likely, and in some cases completely impossible.
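To make "descriptors, not bytes" concrete, here is a minimal C sketch that reads the current and hardware-maximum ring sizes through the SIOCETHTOOL ioctl, which is the same interface `ethtool -g` uses under the hood. The interface name eth0 is just an assumption; pass your own as the first argument.

```c
/* read_ring.c - print current vs. hardware-maximum ring sizes (in descriptors) */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
    const char *ifname = (argc > 1) ? argv[1] : "eth0";  /* assumed interface name */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);             /* any socket works for SIOCETHTOOL */
    if (fd < 0) { perror("socket"); return 1; }

    struct ethtool_ringparam ring = { .cmd = ETHTOOL_GRINGPARAM };
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&ring;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("SIOCETHTOOL"); close(fd); return 1; }

    /* "pending" is the configured ring length, "max_pending" the hardware limit */
    printf("RX ring: %u of max %u descriptors\n", ring.rx_pending, ring.rx_max_pending);
    printf("TX ring: %u of max %u descriptors\n", ring.tx_pending, ring.tx_max_pending);
    close(fd);
    return 0;
}
```

Running this should show the same RX/TX current and maximum values that `ethtool -g` reports, counted in descriptors rather than bytes.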

The next thing is interrupt coalescing: in general, when you increase the ring buffer size, the NIC adjusts its low/high watermarks accordingly and raises an interrupt only once more data has been buffered, i.e. less often. The time the kernel needs to process that larger amount of data during interrupt handling also increases as a result.
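As a companion sketch under the same assumptions as above (and note that some drivers don't implement this command and will return EOPNOTSUPP), the coalescing thresholds can be read with ETHTOOL_GCOALESCE, which is roughly what `ethtool -c` shows: an RX interrupt fires after rx-usecs microseconds or rx-frames buffered frames, whichever limit is hit first.

```c
/* read_coalesce.c - print the NIC's RX interrupt-coalescing thresholds */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
    const char *ifname = (argc > 1) ? argv[1] : "eth0";   /* assumed interface name */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct ethtool_coalesce coal = { .cmd = ETHTOOL_GCOALESCE };
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&coal;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("SIOCETHTOOL"); close(fd); return 1; }

    /* An RX interrupt is raised after this many microseconds or buffered frames,
       whichever the driver reaches first. */
    printf("rx-usecs:  %u\n", coal.rx_coalesce_usecs);
    printf("rx-frames: %u\n", coal.rx_max_coalesced_frames);
    close(fd);
    return 0;
}
```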

All of the above results in a simple bucket effect: with a larger ring, the packet-drop probability decreases and network latency increases. That may be perfectly fine if you're streaming large files over TCP, and completely undesirable if you're running a low-latency, small-packet application (e.g. gaming and the like).
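If bulk throughput is what you're after, the same ioctl interface lets you do what the question hints at and push the RX ring to its hardware maximum (equivalent to `ethtool -G <iface> rx <max>`). This is only a hedged sketch: it needs root/CAP_NET_ADMIN, eth0 is again an assumption, and whether the larger ring actually helps depends on your latency budget as described above.

```c
/* set_ring_max.c - raise the RX ring to the hardware maximum (needs root) */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
    const char *ifname = (argc > 1) ? argv[1] : "eth0";   /* assumed interface name */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct ethtool_ringparam ring = { .cmd = ETHTOOL_GRINGPARAM };
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&ring;

    /* Read the current settings first so we know the hardware maximum. */
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("ETHTOOL_GRINGPARAM"); close(fd); return 1; }

    /* Trade latency for fewer drops: request the maximum number of RX descriptors. */
    ring.cmd = ETHTOOL_SRINGPARAM;
    ring.rx_pending = ring.rx_max_pending;
    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) { perror("ETHTOOL_SRINGPARAM"); close(fd); return 1; }

    printf("%s: RX ring set to %u descriptors\n", ifname, ring.rx_pending);
    close(fd);
    return 0;
}
```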

The default numbers you see are a reasonable trade-off between the two.