Solution 1:

Can you expand a bit on your iSCSI architecture? How many initiator/target addresses are you working with, how many physical switches, and is it all one subnet or multiple?

The basic answer is: MPIO manages end-to-end connectivity paths, so it is better at storage load balancing and connection resilience than generic network redundancy and load-balancing mechanisms like 802.3ad.

The specific technical reasons for this depend on the architecture, so I can be a lot more specific if you provide additional detail on your iSCSI network's setup. A few general examples:

  • Without MPIO, all traffic between your initiator and target is a single conversation. 802.3ad mandates that the order of packets within a conversation not be changed (and you wouldn't want your iSCSI traffic arriving out of order anyway), so that conversation is pinned to one member link and you're limited to the bandwidth of a single link (see the first sketch after this list).
  • MPIO detects and handles path failures, whereas 802.3ad can only compensate for a link failure - and only if that link failure is correctly detected. If your NIC hangs but still reports good link, or your switch configuration gets screwed up for a specific port, you will likely lose storage connectivity despite having a second link that's still working (see the second sketch after this list).
  • With 802.3ad you're tied to a single physical switch, instead of being able to uplink your host's NICs to different switches as MPIO allows.
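To make the first point concrete, here's a rough sketch of how an 802.3ad-style transmit hash pins a flow to one member link. The hash function, link names, and IPs are made up for illustration; real switches and bonding drivers hash on MAC, IP, and/or port tuples, but the effect on a single iSCSI session is the same.

```python
# Illustrative only: a deterministic per-flow hash, as used by
# 802.3ad-style load distribution, always maps a given
# initiator->target conversation to the same member link.
import hashlib

LINKS = ["eth0", "eth1"]  # two members in the aggregation group (hypothetical names)

def pick_link(src_ip: str, dst_ip: str) -> str:
    """Map a flow to a member link via a deterministic layer-3 hash."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return LINKS[digest[0] % len(LINKS)]

# A single iSCSI session is one conversation, so every packet of it
# rides the same link no matter how many members the LAG has.
print(pick_link("10.0.0.10", "10.0.0.50"))  # some member, e.g. eth1
print(pick_link("10.0.0.10", "10.0.0.50"))  # always that same member
```

Adding more flows (more sessions, more targets) spreads load across members, but any one flow never exceeds a single link's bandwidth.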
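And for the second point, a minimal sketch of MPIO-style behavior, assuming two independent iSCSI sessions (one per NIC/portal). The `Path` class and `send_io` function are hypothetical, not any vendor's API; the point is that health is judged end to end (the I/O itself fails or times out), and surviving paths keep serving.

```python
# Hypothetical sketch: round-robin across multiple end-to-end paths,
# failing over when an I/O errors out on a path - even if the NIC
# still reports "link up".
import itertools

class Path:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def send(self, io: str) -> bool:
        # Real MPIO notices SCSI timeouts/errors on the path, not just
        # NIC link state; simulate an end-to-end failure here.
        if not self.healthy:
            raise TimeoutError(f"{io} timed out on {self.name}")
        return True

paths = [Path("nic1->portal-A"), Path("nic2->portal-B")]
rr = itertools.cycle(paths)  # round-robin load balancing across paths

def send_io(io: str) -> None:
    for _ in range(len(paths)):
        path = next(rr)
        try:
            path.send(io)
            print(f"{io} completed via {path.name}")
            return
        except TimeoutError:
            path.healthy = False  # fail the whole path, not just a link
            print(f"{io} failed on {path.name}; retrying on another path")
    raise RuntimeError("all paths down")

paths[0].healthy = False    # e.g. a hung NIC that still shows link up
send_io("READ LBA 0x100")   # retried and completed on the surviving path
```

802.3ad has no equivalent of that retry: if the failure isn't visible as a loss of link, traffic keeps being hashed onto the dead member.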