What's the best MTU setting for a web server?

I manage several web servers whose MTU was left at the default of 1500, though some of the machines have an MTU of 576. I've read a fair amount about MTU and feel I understand it well enough, but I don't have a good sense of the current state of the hardware between my router and my users' PCs. Is 1500 a sane MTU to run on a public web server, or will I be causing problems for some users? Thanks! fz


Solution 1:

The short answer: "It depends".

The longer answer:

Normally, TCP implements the Path MTU Discovery algorithm (RFC 1191).

However, the problem is that middleboxes sometimes drop the ICMP "Fragmentation Needed, DF bit set" messages this algorithm relies on. That said, this is usually infrequent, so you should be OK with the default MTU of 1500. As long as your side does not filter ICMP unnecessarily, any failures to connect will be caused by misconfigurations on the client side.
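If you want to see what path MTU the kernel has actually discovered toward a particular client, here is a minimal sketch of one way to check it (Linux-only; "example.com" and the port are just placeholders, and the numeric constants are the Linux values in case your Python build's socket module does not export them). It sends one full-size probe with the DF bit set and then reads back the kernel's cached estimate:

    #!/usr/bin/env python3
    # Rough illustration: ask the Linux kernel what path MTU it currently
    # believes applies to a destination. A "Fragmentation Needed" ICMP
    # coming back from the path lowers the value reported by IP_MTU.
    import socket
    import time

    # Linux ABI values, used as fallbacks if the socket module lacks them.
    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
    IP_PMTUDISC_DO  = getattr(socket, "IP_PMTUDISC_DO", 2)   # always set DF
    IP_MTU          = getattr(socket, "IP_MTU", 14)

    def cached_path_mtu(host, port=33434, probe_payload=1472):
        # 1472 bytes of payload + 8 UDP + 20 IP header bytes = 1500.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        s.connect((host, port))   # no listener needed; the probe is one-way
        try:
            s.send(b"x" * probe_payload)
        except OSError:
            pass                  # EMSGSIZE: cached path MTU is already lower
        time.sleep(1)             # give any ICMP reply time to arrive
        mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
        s.close()
        return mtu

    if __name__ == "__main__":
        print(cached_path_mtu("example.com"))

A value below 1500 means the kernel has already learned about a smaller link on the path. Note the limitation described above, though: if a middlebox silently drops the Fragmentation Needed ICMP, the kernel never learns the smaller MTU and this will keep reporting 1500.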

An MTU of 576 is what gets assumed if the host does not implement Path MTU Discovery (or if you explicitly disable it). This lowers performance for the connections, so it's better not to touch it.
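If you just want to confirm which of your servers' interfaces are actually sitting at 576 rather than 1500, a small Linux-only sketch using the SIOCGIFMTU ioctl will read the configured value ("eth0" below is a placeholder for your interface name):

    #!/usr/bin/env python3
    # Minimal sketch (Linux-only): read an interface's configured MTU.
    import fcntl
    import socket
    import struct

    SIOCGIFMTU = 0x8921  # Linux ioctl number for "get interface MTU"

    def interface_mtu(ifname):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            # struct ifreq: 16-byte interface name followed by an int
            # that the kernel fills in with the MTU.
            ifreq = struct.pack("16si", ifname.encode("ascii"), 0)
            ifreq = fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifreq)
            return struct.unpack("16si", ifreq)[1]
        finally:
            s.close()

    if __name__ == "__main__":
        print(interface_mtu("eth0"))

This reports the local interface setting only, not the path MTU; it is just a quick way to spot the boxes that were configured down to 576.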

So, 1500 should be a reasonable MTU in the vast majority of cases. The exception is the very specific case where the firewall administrator on your side is overzealously blocking ICMP and the remote client is connecting over a path with a smaller-MTU link in between. Even then, that situation is best fixed by reconfiguring the firewall, so again, 1500 should be fine.

Solution 2:

Sure, 1500 is a sane value.

Finding the best value really requires understanding the hardware and protocols you use and how they work. 1500 is a very reasonable starting place.

I haven't used it myself, but the SG TCP Optimizer has been recommended as a way to investigate your system.

That all said, the answer "it depends" is right, and if I were trying to optimize my web server's speed, I would start with something I have better control over.