Linux bond mode 4 (802.3ad) - 2 switches - 4 NICs
I know that you can use bonding mode 4 on a single server with 2 NICs across 2 switches.
Bond 0 made of:
NIC 1 port 1 -> switch A
NIC 2 port 1 -> switch B
In this case I can lose a switch, a NIC, or a cable and still have my network working; when everything is working I get link aggregation on top of high availability.
My question is: can you do the same with 4 NICs, to get more speed while still playing it safe?
Bond 0 made of:
NIC 1 port 1 -> switch A
NIC 1 port 2 -> switch B
NIC 2 port 1 -> switch A
NIC 2 port 2 -> switch B
The switches will probably be Cisco.
Cheers
Solution 1:
You can actually configure an LACP bond to two separate switches.
Say you have the following:
+------+ +-----+
| eth0 =-----= sw1 |
| eth1 =-----= |
| | +-----+
| | +-----+
| eth2 =-----= sw2 |
| eth3 =-----= |
+------+ +-----+
With all ethX interfaces in bond0, and each switch with a separate active LACP port-channel.
The bond will work fine and will recognize two different Aggregator IDs; however, only one Aggregator can be active at a time, so only one switch will be used at any given moment.
So the bond comes up with two Aggregators, one to sw1 and one to sw2. The first Aggregator is active by default, so all traffic will flow between eth0/eth1 and sw1, while eth2/eth3 and sw2 remain idle as standby.
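If you want to see which Aggregator is currently active, the kernel exposes the 802.3ad state through /proc. A quick check, assuming the bond is named bond0:

grep -A1 "Active Aggregator Info" /proc/net/bonding/bond0       # the active Aggregator ID
grep -E "Slave Interface|Aggregator ID" /proc/net/bonding/bond0  # per-slave Aggregator IDs

Slaves whose Aggregator ID matches the active one are the ones carrying traffic.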
Say sw1's port 1 fails, so the Aggregator to sw1 has only one active port. sw1 will continue to be the active Aggregator. However, you can make the bond fail over to sw2 with the ad_select=bandwidth (whichever Aggregator has the most bandwidth) or ad_select=count (whichever Aggregator has the most slaves) bonding module parameter.
Say sw1 fails altogether: that Aggregator goes down, and sw2 takes over.
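If you configure the bond directly through module options (rather than through your distro's network config, as in Solution 2 below), a minimal sketch of setting this at module load time could look like the following; the file name bonding.conf is just a convention:

# /etc/modprobe.d/bonding.conf
# mode=4 is 802.3ad; ad_select=bandwidth re-selects the Aggregator
# with the most total bandwidth after a link failure
options bonding mode=4 miimon=100 ad_select=bandwidth

Note that ad_select only changes the Aggregator selection policy; with the default (stable), the active Aggregator is not re-selected while it still has at least one working port.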
Solution 2:
I just finished configuring exactly the same setup on Ubuntu server 14.04 LTS.
The procedure should be identical for any Linux distro that configures networking through the interfaces file (e.g. Debian and most of its derivatives, like Ubuntu and Mint).
On each switch:
Configure both server-facing ports in an 802.3ad EtherChannel. There is no need for a channel definition linking both switches; the channels should be defined on each switch individually.
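On a Cisco switch this is a standard LACP port-channel. A minimal sketch (the port names Gi0/1 and Gi0/2 are assumptions; substitute the ports the server actually plugs into, and repeat on the second switch with its own ports):

! Switch A (repeat on switch B for its own pair of ports)
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
! "mode active" selects LACP, which is what 802.3ad / bond mode 4 expects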
On the server:
First, install the package "ifenslave-2.6" through your package manager.
Then edit /etc/modules and add an extra line with the word "bonding" to it.
E.g.:
# /etc/modules: kernel modules to load at boot time
loop
lp
rtc
bonding
Run "modprobe bonding" once to load the bonding module right now.
Then edit /etc/network/interfaces to define the real NICs as manual interfaces that are slaves of the new "bond0" interface.
E.g.:
# The loopback interface
auto lo
iface lo inet loopback

# The individual interfaces
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0

auto eth3
iface eth3 inet manual
    bond-master bond0

# The bond interface
auto bond0
iface bond0 inet static
    address 192.168.1.200
    gateway 192.168.1.1
    netmask 255.255.255.0
    bond-mode 4
    bond-miimon 100
    bond-slaves eth0 eth1 eth2 eth3
    bond-ad_select bandwidth
The last statement ensures that whichever of the two pairs still has full connectivity gets all the traffic when just one interface goes down.
So if eth0 and eth1 connect to switch A, and eth2 and eth3 go to switch B, the connection will use switch B if either eth0 or eth1 goes down.
Last but not least:
ifup eth0 & ifup eth1 & ifup eth2 & ifup eth3 & ifup bond0
That's it. It works and will automatically come back online after a reboot.
You can observe failover behavior by bringing down individual ethX interfaces with ifdown and observing the resulting aggregated bandwidth through "ethtool bond0".
(No need to go to the server room and yank cables.)
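For example, a hands-off failover test (assuming the layout above, with eth0/eth1 on switch A and eth2/eth3 on switch B) could look like this:

cat /proc/net/bonding/bond0    # note the currently active Aggregator ID
ifdown eth0                    # simulate a link failure on switch A's pair
cat /proc/net/bonding/bond0    # the active Aggregator should move to the switch B pair
ethtool bond0 | grep Speed     # aggregated speed of the remaining active pair
ifup eth0                      # restore the link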