Relationship between MTU and NFS rsize/wsize options
I'm trying to understand networking settings related to NFS and various buffer sizes (and there are quite a few).
I'm running Wireshark and inspecting the TCP packets arriving at the NFS server. During an extended write operation (client->server), Wireshark shows a maximum packet size of 32626, assuming I am interpreting it correctly ("bytes on the wire", which I suppose includes all network-layer headers, etc.).
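For what it's worth, one way to compare the on-the-wire frame size against the TCP payload size is tshark's field output; a minimal sketch (eth0 is just an example interface name, 2049 is the standard NFS port):
# tshark -i eth0 -f "tcp port 2049" -T fields -e frame.len -e tcp.len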
The "rsize" and "wsize" NFS settings for exported storage is set to 32k on both C/S, so I figured the above results were a result of this setting. However, increasing these values does NOT increase the packet size shown by Wireshark.
So my question is: what other constraints could be in place? I've done a fair amount of research, and this is what I've come across so far. It seems to me that none of the network constraints below would limit the transmission size to 32k:
From sysctl:
net.ipv4.tcp_mem = 4096 87380 4194304
net.ipv4.tcp_{r,w}mem = 4096 87380 4194304
net.core.{r,w}mem_max = 131071
net.core.rmem_default = 229376
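For reference, these can be inspected and changed at runtime with sysctl; the value below is only an illustration, not a recommendation:
# sysctl net.core.rmem_max
# sysctl -w net.core.rmem_max=4194304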
My MTU is currently 8K.
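The MTU can be double-checked with ip link (eth0 is just an example interface name):
# ip link show eth0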
The NFS {r,w}size values are defined by the client mount options and/or the server's capabilities. IOW, you can set them on the command line like:
# mount -o rsize=1048576 .....
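Once mounted, the values the client actually negotiated can be verified from the mount options, e.g.:
# nfsstat -m
# grep nfs /proc/mounts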
The Linux client has different default values for v3 and v4: 32k and 1MB respectively. The NFS server may request smaller sizes or support bigger ones. You should be able to see this with Wireshark as the FSINFO call for v3, or the FATTR4_MAXREAD/FATTR4_MAXWRITE file attributes, which are requested with the very first GETATTR call.
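A sketch of spotting the v3 negotiation in a saved capture: FSINFO is procedure 19 in NFSv3, and its reply carries the server's rtmax/wtmax limits (capture.pcap is a placeholder; the field name follows the Wireshark NFS dissector and may differ across versions):
# tshark -r capture.pcap -Y "nfs.procedure_v3 == 19" -V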
The RPC layer may split a single read or write request into multiple RPC fragments. The TCP layer may split a single RPC fragment into multiple TCP packets. On the other hand, the TCP layer may merge multiple RPC requests into a single TCP packet if they fit.
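Note that with TCP reassembly enabled (the Wireshark default), the displayed sizes are reassembled RPC messages, not individual wire frames. To see the actual on-wire segment sizes, desegmentation can be turned off, e.g. (capture.pcap is a placeholder):
# tshark -o tcp.desegment_tcp_streams:false -r capture.pcap -Y "tcp.port == 2049"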
There is a somewhat outdated document, Optimizing NFS Performance, but it will give you an idea of how to tweak the numbers.