nic bonding very slow

I have two DL380 G9 servers, each with two 4-port NICs, running Debian Buster. I have connected 7x 0.50m cat6e patch cables directly server-to-server and set /etc/network/interfaces on both servers (with a minor difference in the IP addresses) as:

auto lo
iface lo inet loopback

iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto eno49
iface eno49 inet manual

auto eno50
iface eno50 inet manual

auto eno51
iface eno51 inet manual

auto eno52
iface eno52 inet manual

auto bond0
iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves eno2 eno3 eno4 eno49 eno50 eno51 eno52
        bond-miimon 100
        bond-mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.234/16
        gateway 10.0.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

The plan was to use the bonded connection for ZFS replication between the nodes. The issue is that the throughput I get is limited to 2.25 Gbps (value taken from nload). The weird part is that if I bond 3 ports instead of 7... I again get 2.25 Gbps. It's as if 4 of the ports are not utilised. Any idea how I can diagnose the issue?


Don't use round-robin: bond-mode balance-rr

This mode will cause a lot of out-of-order TCP traffic and limit throughput like you are experiencing.
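
You can see this for yourself: out-of-order delivery shows up as TCP retransmissions, and the per-slave counters show whether every member is actually carrying traffic. A rough sketch of the checks I would run while a transfer is going (iperf3 is not part of your setup, it is just a convenient load generator):

    # bond status: mode, slaves and the link state of each member
    cat /proc/net/bonding/bond0

    # per-slave byte/packet counters; run twice to see which slaves move traffic
    ip -s link show eno2
    ip -s link show eno49

    # TCP retransmission counters (needs the net-tools package); compare before/after a transfer
    netstat -s | grep -i retrans

    # raw throughput, one stream vs. several parallel streams
    iperf3 -s                      # on one node
    iperf3 -c 10.10.10.11 -P 1     # single stream from the other node
    iperf3 -c 10.10.10.11 -P 8     # eight parallel streams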

Change the bond to bond-mode balance-xor, which hashes each flow onto a single member of the bond. A single stream will therefore run at 1x NIC speed (1 Gbps), but you can run many flows and reach the full speed of all NICs (7 Gbps).
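
As a sketch against your existing stanza, only the mode line changes; everything else stays as you already have it:

    auto bond0
    iface bond0 inet static
            address 10.10.10.11/24
            bond-slaves eno2 eno3 eno4 eno49 eno50 eno51 eno52
            bond-miimon 100
            bond-mode balance-xor

Reload networking on both nodes afterwards (for example ifdown bond0 && ifup bond0, or a reboot).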

If the traffic is within the same subnet, then the default bond-xmit-hash-policy layer2 may be fine; it performs the load balancing based on MAC addresses.

If all your traffic goes through a default gateway, then look at setting bond-xmit-hash-policy layer2+3 or bond-xmit-hash-policy layer3+4, which hash on MAC plus IP addresses and on IP addresses plus TCP/UDP ports respectively. This will allow you to balance flows to multiple hosts, or multiple sessions to the same host, or just improve the balancing algorithm.
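
In your setup the two nodes talk to each other directly, so there is only one MAC pair and one IP pair; layer3+4 is the policy that can still spread multiple TCP sessions across the slaves. A sketch of the extra line and a quick runtime check:

    # added to the bond0 stanza in /etc/network/interfaces
            bond-xmit-hash-policy layer3+4

    # verify the running bond picked it up
    cat /sys/class/net/bond0/bonding/mode
    cat /sys/class/net/bond0/bonding/xmit_hash_policy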

I presume your switch is correctly configured with a port-channel, EtherChannel or some other sort of Link Aggregation Group; this is needed for your existing balance-rr mode anyway. The switch will have a load-balancing policy of its own (similar to the layer 2/3/4 policies above), so make sure the switch is configured with a useful policy too.

Make sure you are running irqbalance so multiple CPUs can receive traffic streams at the same time. If you don't spread IRQs, then all traffic will be handled by CPU core 0 which becomes a bottleneck.
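
A quick sketch of how to check that on Debian Buster:

    # install and enable the IRQ balancing daemon
    apt install irqbalance
    systemctl enable --now irqbalance

    # see which CPU cores service the NIC interrupts
    grep -E 'eno(2|3|4|49|50|51|52)' /proc/interrupts

    # per-core load while a transfer is running (mpstat is in the sysstat package)
    mpstat -P ALL 2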

However, if your aim is to achieve a single 7 Gbps TCP stream, that is not a feature that either bonding or teaming offers, and it is not how link aggregation works. If you want one single faster stream, upgrade to 10 Gbps or faster.


Network interface bonding is not quite the same as channel aggregation in switches. A switch port is simple and all ports are connected to the same fabric, but computer interfaces are more complicated: parts of the network stack are offloaded to the network cards, and whatever is not offloaded costs CPU time.

This causes a lot of problems. For instance, you can offload TCP functions to one network card, but then you cannot push packets of that same connection out through another card.
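
You can see what each NIC currently offloads with ethtool; a small sketch using the interface names from the question:

    # list offload features: TSO, GSO, GRO, LRO, checksumming, ...
    ethtool -k eno2 | grep -E 'segmentation|receive-offload|checksum'

    # toggling a feature for a test run (an experiment, not a recommendation)
    ethtool -K eno2 tso off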

That means that whenever you use bonding you should plan it well and test it thoroughly, and you have to find the right configuration for your particular workload.

  1. You should try different bond modes. We use balance-tcp here; it limits each connection to one interface, but allows offloading every possible function to the NIC. This works well if you have many TCP connections.

  2. You may want to try teaming instead of bonding. It is essentially a newer take on bonding, with many additional features and lower overhead (a minimal teamd sketch follows this list).

  3. Each time you change the setup, you should test it in different ways: big blocks, small blocks, one connection, many connections, a lot of connections. Look at fio(1). Do not forget to monitor CPU usage in all cases.

  4. The HP DL380 G9 has great optional 10G mezzanine cards, and if I were you I would think about buying 10G cards rather than messing around with bonding. And be aware: don't use copper Ethernet on those, only fiber or direct-attach cables.
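
On the teaming point (item 2), a minimal sketch of an equivalent team device using teamd's JSON config with the loadbalance runner; save it as e.g. /etc/teamd-repl.conf (an example path), with the interface names taken from the question. The slaves must not be enslaved to bond0 at the same time:

    {
        "device": "team0",
        "runner": { "name": "loadbalance", "tx_hash": ["l3", "l4"] },
        "link_watch": { "name": "ethtool" },
        "ports": { "eno2": {}, "eno3": {}, "eno4": {},
                   "eno49": {}, "eno50": {}, "eno51": {}, "eno52": {} }
    }

    # bring it up by hand for a test (package "teamd" on Debian)
    teamd -g -f /etc/teamd-repl.conf -d
    ip addr add 10.10.10.11/24 dev team0
    ip link set team0 up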