Bonding 2 or more gigabit NICs together to get 2 Gbit/s between 1 server and 1 client?

Solution 1:

I've set up a lab with two servers, each with two gigabit NICs connected back-to-back by two CAT5e cables. Using Debian 5.0.5, freshly installed on both servers, I configured a bonding master interface bond0 with eth0 and eth1 on both machines, using bond mode 0 (balance-rr), since there's really no need for anything more complex than that.

The configs (/etc/network/interfaces) look somewhat like this:

iface bond0 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
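
With a stanza like that on both ends, you can bring the bond up and confirm that both slaves were actually enslaved. A minimal check, assuming Debian's ifenslave package is installed and the interface names match the example above:

```shell
# Bring the bonded interface up (ifup reads /etc/network/interfaces)
ifup bond0

# The bonding driver reports the mode, link state and slave list here;
# both eth0 and eth1 should appear with "MII Status: up"
cat /proc/net/bonding/bond0
```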

I installed Apache on one of the servers and downloaded a file from that Apache instance on the other machine. I was not able to achieve any speed above 1 Gbit/s, but my guess is that this was due to disk I/O bottlenecks. I can, however, see traffic flowing on both physical interfaces, so I'd say what you want is possible.
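
Rather than downloading over HTTP, a memory-to-memory test takes the disks out of the picture. A sketch using iperf, assuming it's installed on both hosts and that 192.168.1.1 is the bond address from the example above:

```shell
# On the first server: listen for test traffic
iperf -s

# On the second server: run several parallel TCP streams for 30 seconds.
# balance-rr can reorder packets within a single stream, so multiple
# parallel streams usually aggregate closer to the full 2 Gbit/s.
iperf -c 192.168.1.1 -P 4 -t 30
```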

Let me know how it turns out then :)

Hope this helps!

Solution 2:

This can be done with most NICs, but you also need a switch that supports it. Most managed switches handle link aggregation just fine; unmanaged switches generally won't.
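
For a switch-based setup, the Linux side would typically use 802.3ad (LACP) rather than balance-rr, paired with a LAG/port-channel configured on the managed switch. A sketch in the same /etc/network/interfaces style as Solution 1 (addresses are illustrative):

    iface bond0 inet static
        address 192.168.1.1
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate fast

Note that 802.3ad hashes each flow onto one slave, so a single TCP connection still tops out at 1 Gbit/s; only the aggregate across many connections can approach 2 Gbit/s.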

Make sure your servers can handle the bandwidth before spending money: a single cheap hard drive generally won't sustain 2 Gbit/s. A nice big disk array is a different matter, though.
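
A quick way to sanity-check this is to time a large sequential write with dd: 2 Gbit/s is roughly 250 MB/s, so the disk needs to report at least that. A rough sketch (writes a 1 GiB scratch file; conv=fdatasync makes dd include the flush to disk in its timing):

```shell
# Write 1 GiB and flush it to disk; dd prints a MB/s figure on completion.
dd if=/dev/zero of=/tmp/bondtest.img bs=1M count=1024 conv=fdatasync
rm /tmp/bondtest.img
```

For read numbers, beware the page cache: use a file larger than RAM, or the result will reflect memory speed rather than the disk.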

Solution 3:

It's certainly possible to do this with a switch; I'm not sure about doing it directly between computers, because I've never tried.

As for whether or not it's worth it, that will depend on the quality of the NICs used, the speed of the internal bus they're plugged into, and, as noted in Luma's reply, the speed of the disks being used. It really is a case of try it and see, I'm afraid.