Jumbo frames between KVM guest and host?
I am trying to implement a 9000 byte MTU for storage communication between KVM guests and the host system. The host has a bridge (br1) with a 9000 byte MTU:
host# ip addr show br1
8: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
link/ether fe:54:00:50:f3:55 brd ff:ff:ff:ff:ff:ff
inet 172.16.64.1/24 brd 172.16.64.255 scope global br1
inet6 fe80::21b:21ff:fe0e:ee39/64 scope link
valid_lft forever preferred_lft forever
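For reference, an MTU like this can be set with iproute2, roughly as follows (a sketch; on many kernels the bridge MTU is capped at the lowest MTU among its attached ports, so each port needs a matching value, and <bridge-port> below is a placeholder):
host# ip link set dev br1 mtu 9000
host# ip link set dev <bridge-port> mtu 9000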
The guests have an interface attached to this bridge that also has a 9000 byte MTU:
guest# ip addr show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:50:f3:55 brd ff:ff:ff:ff:ff:ff
inet 172.16.64.10/24 brd 172.16.64.255 scope global eth2
inet6 fe80::5054:ff:fe50:f355/64 scope link
valid_lft forever preferred_lft forever
I can ping from the host to the guest:
host# ping -c4 172.16.64.10
PING 172.16.64.10 (172.16.64.10) 56(84) bytes of data.
64 bytes from 172.16.64.10: icmp_seq=1 ttl=64 time=1.15 ms
64 bytes from 172.16.64.10: icmp_seq=2 ttl=64 time=0.558 ms
64 bytes from 172.16.64.10: icmp_seq=3 ttl=64 time=0.566 ms
64 bytes from 172.16.64.10: icmp_seq=4 ttl=64 time=0.631 ms
--- 172.16.64.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.558/0.727/1.153/0.247 ms
But if I increase the ping packet size beyond 1490 bytes, I no longer have connectivity:
host# ping -c4 -s 1491 172.16.64.10
PING 172.16.64.10 (172.16.64.10) 1491(1519) bytes of data.
--- 172.16.64.10 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3000ms
A packet trace shows that these packets never reach the guest. Everything I've read indicates that the Linux bridge interface and the virtio network drivers both support jumbo frames, but this sure looks like an MTU problem to me.
Am I missing something really obvious?
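(As an aside, a useful end-to-end check once this works is a non-fragmenting ping sized to the full MTU; 8972 bytes of ICMP payload plus 8 bytes of ICMP header and 20 bytes of IPv4 header comes to exactly 9000:)
host# ping -M do -s 8972 -c4 172.16.64.10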
Update
Showing the host side of the guest interface:
host# brctl show
bridge name bridge id STP enabled interfaces
br1 8000.fe540050f355 no vnet2
host# ip addr show vnet2
11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master br1 state UNKNOWN qlen 500
link/ether fe:54:00:50:f3:55 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe50:f355/64 scope link
valid_lft forever preferred_lft forever
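(Since the bridge MTU is generally limited by its lowest-MTU port, one quick sanity check is to list every interface enslaved to br1 together with its MTU; with a reasonably recent iproute2 this can be done with the master filter:)
host# ip link show master br1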
Solution 1:
While this was an MTU problem, it turns out that it had nothing to do with the MTU settings on any of the component devices. As I showed in the original question, the host bridge, the host tap interface, and the guest interface all had the same MTU setting (9000 bytes).
The actual problem was a libvirt/KVM configuration issue. By default, libvirt does not use virtio devices; absent an explicit configuration, you end up with a Realtek RTL-8139 NIC. This virtual NIC does not support jumbo frames.
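A quick way to see which emulated NIC a guest actually received is to check the driver bound to the interface from inside the guest; a virtio NIC shows up with the virtio_net driver, while the default emulated card shows up as a Realtek device (a sketch, assuming ethtool and lspci are available in the guest):
guest# ethtool -i eth2 | grep driver
guest# lspci | grep -i ethernet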
To use virtio devices, you need to specify an explicit model. When using virt-install:
virt-install ... -w bridge=br1,model=virtio
Or, after the fact, by adding a <model> tag to the appropriate <interface> element in the domain XML:
<interface type="bridge">
  <model type="virtio"/>
  <source bridge="br1"/>
  <target dev="vnet2"/>
</interface>
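The XML can be changed with virsh edit; note that a NIC model change only takes effect after the guest is fully powered off and started again (an in-guest reboot keeps the old virtual hardware). The domain name below is a placeholder:
host# virsh edit <guest-name>
host# virsh shutdown <guest-name>
host# virsh start <guest-name>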
With this change in place, everything works as intended.