Network bonding mode 802.3ad on Ubuntu 12.04 and a Cisco Switch
You'll never get more than one NIC's worth of performance between two servers for a single flow. Switches do not spread the frames from a single source across the multiple links in a Link Aggregation Group (LAG). What they actually do is hash the source MAC or IP (or both) and use that hash to assign the flow to one link.
So your server can transmit across as many NICs as you want, but those frames will all be sent to the destination server over one link.
To test a LAG, use multiple threads so that they use multiple links. With netperf, try:
netperf -H ipaddress &
netperf -H ipaddress &
netperf -H ipaddress &
netperf -H ipaddress &
netperf -H ipaddress &
You should see some of the traffic hitting the other slaves in the bond.
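One rough way to confirm that is to snapshot the per-slave TX counters before and after the run (a sketch only; the slave names are the ones from my config further down, and netperf needs netserver running on the remote host first):
# Run once before and once after the parallel netperf test; with a per-flow
# hash, several slaves should show their tx_bytes counters moving.
for s in p4p1 p4p2 p6p1 p6p2; do
    printf '%s ' "$s"; cat "/sys/class/net/$s/statistics/tx_bytes"
done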
I have four 10GbE ports in an LACP bond and I am getting 32Gb/s to 36Gb/s each way between the two servers.
The other way is to set up aliases on the bond with multiple IP addresses and then launch multiple netperf instances against the different addresses.
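For example (a sketch only; the addresses here are placeholders for whatever aliases you add, and in my case the addressed interface is actually a VLAN on top of bond0):
# On the receiving host, add a couple of extra addresses to the bond:
ip addr add 10.3.100.61/16 dev bond0
ip addr add 10.3.100.62/16 dev bond0
# On the sending host, aim one netperf at each address:
netperf -H 10.3.100.60 &
netperf -H 10.3.100.61 &
netperf -H 10.3.100.62 &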
Your server with the Intel Xeon X5690 processors has more than enough power to drive close to 10Gb/s per core.
I have driven 80Gb/s of unidirectional traffic across 8x10GbE ports. The key is using layer3+4 hashing on both the switch and the bond, and using multiple threads.
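On the Linux side the hash policy can be checked, and usually changed, at runtime through sysfs; the interfaces stanza below sets it persistently. The switch needs the matching per-flow hash on its port-channel, which on many Cisco platforms is the global port-channel load-balance setting (exact keywords vary by platform and IOS version, so check yours). A sketch for the Linux side:
cat /sys/class/net/bond0/bonding/xmit_hash_policy     # e.g. "layer3+4 1"
# On some kernels you may need to take the bond down before changing this:
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy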
Here is an example of my 4x10GbE configuration... My interface config file:
#Ports that will be used for VXLAN Traffic in on Bond0
auto p4p1
auto p4p2
auto p6p1
auto p6p2
iface p4p1 inet manual
    bond-master bond0

iface p4p2 inet manual
    bond-master bond0

iface p6p1 inet manual
    bond-master bond0

iface p6p2 inet manual
    bond-master bond0

#Configure Bond0. Setup script will provide VXLAN VLAN configuration on bond0
auto bond0
iface bond0 inet manual
    #address 10.3.100.60
    #netmask 255.255.0.0
    bond-mode 4
    bond-slaves none
    bond-lacp-rate 0
    bond-ad-select 1
    bond-miimon 100
    bond-xmit_hash_policy 1
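For reference, the numeric values in that stanza are just the bonding driver's aliases for the names you see in the /proc/net/bonding/bond0 output below; you can read them back through sysfs once the bond is up (a sketch, assuming the bond is named bond0):
cat /sys/class/net/bond0/bonding/mode                 # 802.3ad 4
cat /sys/class/net/bond0/bonding/lacp_rate            # slow 0
cat /sys/class/net/bond0/bonding/ad_select            # bandwidth 1
cat /sys/class/net/bond0/bonding/xmit_hash_policy     # layer3+4 1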
root@host2:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): bandwidth
Active Aggregator Info:
        Aggregator ID: 2
        Number of ports: 4
        Actor Key: 33
        Partner Key: 32768
        Partner Mac Address: 54:7f:ee:e3:01:41

Slave Interface: p6p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 90:e2:ba:47:2b:e4
Aggregator ID: 2
Slave queue ID: 0

Slave Interface: p4p2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 90:e2:ba:47:2b:69
Aggregator ID: 2
Slave queue ID: 0

Slave Interface: p4p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 90:e2:ba:47:2b:68
Aggregator ID: 2
Slave queue ID: 0

Slave Interface: p6p2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 90:e2:ba:47:2b:e5
Aggregator ID: 2
Slave queue ID: 0
Here is the result of running multiple instances of netperf:
root@host6:~# vnstat -i bond0.192 -l
Monitoring bond0.192... (press CTRL-C to stop)
rx: 36.83 Gbit/s 353202 p/s tx: 162.40 Mbit/s 314535 p/s
 bond0.192  /  traffic statistics

                           rx         |       tx
--------------------------------------+------------------
  bytes                  499.57 GiB   |       2.15 GiB
--------------------------------------+------------------
          max          36.90 Gbit/s   |  170.52 Mbit/s
      average          20.05 Gbit/s   |   86.38 Mbit/s
          min              0 kbit/s   |       0 kbit/s
--------------------------------------+------------------
  packets                  39060415   |       34965195
--------------------------------------+------------------
          max            369770 p/s   |     330146 p/s
      average            186891 p/s   |     167297 p/s
          min                 0 p/s   |          0 p/s
--------------------------------------+------------------
  time                  3.48 minutes
Hope this helps...