How to enable multipath I/O for NFS-backed VMware ESXi storage?
Right now I have 3 VMware ESXi hosts using 4Gb fibre channel to our NetApp for storage. I'd like to switch to NFS over 10Gb ethernet.
Each ESXi server has two 10Gb ethernet ports and each controller on my NetApp has two 10Gb ethernet ports. The only piece left that I need to get is the ethernet switches.
I'd like to have two ethernet switches for redundancy, so that if one switch dies, storage will still work, identical to the dual-switch fibre channel multipath I/O I have now.
But how do you do the same thing for NFS over ethernet? I know how to handle the ESXi side and the NetApp side of the equation; it's the switching side that I'm not sure about.
I know how to set up an LACP trunk/EtherChannel bond, but that doesn't work between physically separate switches.
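For reference, the host-side layout I have in mind is two vmkernel ports on separate storage subnets, one hanging off each switch, with datastores mounted through different filer addresses so each path stays independent. Roughly something like this (vSwitch names, vmnics, addresses, and export paths below are just placeholders):

```
# Sketch only - names, NICs, and addresses are placeholders (ESX/ESXi 4.x style commands)

# First storage path: vSwitch1 / vmnic2 uplinks to switch A, subnet 192.168.10.0/24
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A NFS-A vSwitch1
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 NFS-A

# Second storage path: vSwitch2 / vmnic3 uplinks to switch B, subnet 192.168.20.0/24
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A NFS-B vSwitch2
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 NFS-B

# Mount each datastore through a different filer address so the NFS
# traffic is spread across both links/switches
esxcfg-nas -a -o 192.168.10.20 -s /vol/vm_datastore_a datastore_a
esxcfg-nas -a -o 192.168.20.20 -s /vol/vm_datastore_b datastore_b
```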
So, can you recommend a pair of Cisco switches to use for this purpose, and which Cisco IOS features I'd use to enable this kind of multipath NFS I/O? I'd like the switches to have at least twelve 10Gb ports each. I know these switches will be mega-expensive; that's fine.
My firm just expanded our Cisco 4507 chassis switch by adding another supervisor engine and 6-port 10GbE line cards to accommodate the storage network (VMware and NexentaStor/ZFS). I know it's not the multiple-switch arrangement, but it was a good way to get the number of ports we needed. Elsewhere in the industry, the Cisco Nexus and the 4900M seem to be popular choices for the kind of setup you're describing.
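If you do go the Nexus route, virtual port channels (vPC) are the feature that gets around the limitation you mentioned: a pair of Nexus switches can present a single LACP port-channel to the NetApp (or to an ESXi host) even though the member links land on two physical switches. A minimal sketch, assuming port-channel 10 is a dedicated peer link between the two switches and port-channel 20 is a filer-facing bundle on VLAN 100 (interface numbers, VLAN, and keepalive addresses are made up):

```
feature lacp
feature vpc

vpc domain 1
  ! keepalive addresses are placeholders (typically the two mgmt0 interfaces)
  peer-keepalive destination 10.1.1.2 source 10.1.1.1

! inter-switch peer link (repeat on both switches)
interface port-channel10
  switchport
  switchport mode trunk
  vpc peer-link

! filer-facing bundle; configure the same vPC number on both switches
interface port-channel20
  switchport
  switchport mode access
  switchport access vlan 100
  vpc 20

interface Ethernet1/10
  switchport
  switchport mode access
  switchport access vlan 100
  channel-group 20 mode active
```

The alternative, which works on plain Catalyst gear without any cross-switch bonding, is to run two separate storage subnets, one per switch, and split the datastore mounts across them as sketched in the question.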