iSCSI design options for 10GbE VMware distributed switches? MPIO vs. LACP

I'm working on expanding the storage backend for several VMware vSphere 5.5 and 6.0 clusters at my datacenter. I've primarily used NFS datastores throughout my VMware experience (Solaris ZFS, Isilon, VNX, Linux ZFS), and may introduce a Nimble iSCSI array into the environment, as well as possibly a Tegile (ZFS-based) hybrid array.

The current storage solutions are Nexenta ZFS and Linux ZFS-based arrays, which provide NFS mounts to the vSphere hosts. Network connectivity is delivered via 2 x 10GbE LACP trunks on the storage heads and 2 x 10GbE on each ESXi host. The switches are dual Arista 7050S-52 top-of-rack units configured as MLAG peers.

On the vSphere side, I'm using vSphere Distributed Switches (vDS) configured with LACP bonds on the 2 x 10GbE uplinks and Network I/O Control (NIOC) apportioning shares to the VM port group, NFS, vMotion, and management traffic.

This design approach has worked wonderfully for years, but adding iSCSI block storage is a big shift for me. I'll still need to retain the NFS infrastructure for the foreseeable future.

I would like to understand how I can integrate iSCSI into this environment without changing my physical design. The MLAG on the ToR switches is extremely important to me.

  • For NFS-based storage, LACP is the commonly accepted means of providing path redundancy.
  • For iSCSI, LACP is usually frowned upon, with MPIO multipath designs being the recommended approach.
  • I'm using 10GbE everywhere and would like to keep the simple two-port links to each server, for cabling and design simplicity.

Given the above, how can I make the most of an iSCSI solution?

  • Configure iSCSI over LACP?
  • Create VMkernel iSCSI adapters on the vDS and try to bind them to separate uplinks to achieve some sort of mutant MPIO?
  • Add more network adapters?

Solution 1:

I wouldn't recommend running iSCSI over LACP, as it offers no real benefit over basic link redundancy: the iSCSI initiator's own multipathing already handles both failover and active use of both links.

Creating VMkernel ports for iSCSI on your vDS and binding them to the software iSCSI adapter is exactly what you should do. This will give you true MPIO. This blog post covers roughly what you are trying to do (ignoring the part about migrating from standard switches): https://itvlab.wordpress.com/2015/02/14/how-to-migrate-iscsi-storage-from-a-standard-switch-to-a-distributed-switch/
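
As a rough sketch of that setup from the ESXi shell (the adapter name vmhba33, the VMkernel ports vmk2/vmk3, the discovery address 10.0.10.50, and the naa. device ID are all placeholders for whatever your hosts and the Nimble actually present): each iSCSI VMkernel port group is pinned to a single active uplink with the other uplink set to unused, and the ports are then bound to the software iSCSI adapter.

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Bind each iSCSI VMkernel port to the software iSCSI adapter (port binding)
    # vmhba33 / vmk2 / vmk3 are placeholder names
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3

    # Point dynamic discovery at the array's discovery portal and rescan
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.10.50:3260
    esxcli storage core adapter rescan --adapter=vmhba33

    # Set Round Robin on the iSCSI LUN so both paths carry I/O
    # (naa.xxxxxxxx is a placeholder for the actual device ID)
    esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR

Round Robin across the two bound VMkernel ports is what actually drives traffic down both 10GbE links at once, which is something a single NFS session over LACP never did for you.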

You should not need to add more network adapters if you already have two 10GbE ports available for iSCSI. I would, however, recommend enabling jumbo frames (MTU 9000) on your iSCSI network. This has to be set at every level of the path: the VMkernel ports, the vDS, the physical switches, and the SAN appliances.
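
A minimal sketch of that, again with vmk2/vmk3 and 10.0.10.50 as placeholders for the iSCSI VMkernel ports and the array portal (the vDS MTU itself is raised in the switch settings in the Web Client, and the Arista ports and the array need matching MTUs):

    # Raise the MTU on the iSCSI VMkernel interfaces
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000
    esxcli network ip interface set --interface-name=vmk3 --mtu=9000

    # Verify jumbo frames end to end: 8972-byte payload + 28 bytes of
    # IP/ICMP headers = 9000, with the don't-fragment flag set
    vmkping -I vmk2 -d -s 8972 10.0.10.50
    vmkping -I vmk3 -d -s 8972 10.0.10.50

If the 8972-byte vmkping fails while a normal vmkping works, something in the path is still at MTU 1500.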