EMC VNX iSCSI setup - unsure about SP/port assignment
We have a new VNX5300 waiting to be configured, and I need to plan out the network infrastructure before the EMC tech arrives. It has 4x 1 Gbit iSCSI ports per SP (8 ports in total), and I'd like to get the most performance out of them until we jump over to 10 Gbit iSCSI.
From what I can read in the docs, the recommendation is to use only two ports per SP, with 1 active and 1 passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then recommend not using more than two of them?
Also - I'm a bit unsure about the zoning. The best practices guide states that each port on each SP should be separated from the others onto its own logical network. Does this mean I have to create 4 logical networks to be able to use all 8 ports?
It also gives the following example:
Does this mean that A0 and B0 should sit on the same physical switch as well? Won't that put all traffic on one switch (if both A1 and B1 are passive)?
Edit: Another brain puzzle
I don't get it - each host (as in server) should not have more iSCSI bandwidth available than the storage processor. Why on earth does this matter? If serverA has 1 Gbit and serverB has 100 Mbit, then the resulting bandwidth between them is 100 Mbit. How can this result in some kind of oversubscription?
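Just to spell out the arithmetic behind my confusion (hypothetical numbers, Python only as a calculator):

```python
# My point: a single iSCSI flow can't go faster than the slower endpoint.
server_a_mbit = 1000   # serverA: 1 Gbit NIC
server_b_mbit = 100    # serverB: 100 Mbit NIC
print(min(server_a_mbit, server_b_mbit))   # -> 100, the slower link wins

# And the doc's rule as I read it: even a host with more bandwidth than
# the SP just tops out at what the SP's ports can deliver in aggregate.
host_nic_mbit = 10_000                     # hypothetical 10 Gbit host NIC
sp_ports_mbit = [1000, 1000, 1000, 1000]   # 4x 1 Gbit ports per SP
print(min(host_nic_mbit, sum(sp_ports_mbit)))  # -> 4000
```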
Edit4: Wait, what. Active and passive ports? The VNX runs in an ALUA configuration with asymmetric active/active... there shouldn't be any passive ports, only preferred ones.
What EMC's documents seem to be describing is two separate IP broadcast domains - two separate fabrics on separate hardware, so that a misconfiguration in a given switch, a switching loop, or some such doesn't bring down all storage connectivity.
Along these lines:
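Something like the following, sketched in Python just to show the shape - the subnets and addresses are purely illustrative, not anything from the EMC docs:

```python
# Two iSCSI fabrics, each its own broadcast domain on its own switch,
# with one port from each SP in each fabric (hypothetical subnets/IPs).
fabrics = {
    "fabric 1 (switch 1, 10.168.10.0/24)": {
        "SP A port 0": "10.168.10.8",
        "SP B port 0": "10.168.10.10",
    },
    "fabric 2 (switch 2, 10.168.11.0/24)": {
        "SP A port 1": "10.168.11.8",
        "SP B port 1": "10.168.11.10",
    },
}

# Sanity check: every fabric should have a path to both SPs, so losing
# an entire switch never cuts a host off from the array.
for name, ports in fabrics.items():
    sps = {port_name.split()[1] for port_name in ports}  # {"A", "B"}
    assert sps == {"A", "B"}, f"{name} is missing an SP"
```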
I personally think it's a little nuts to keep creating additional fabrics for each port per SP, though - I'd say just split them up evenly among the storage fabrics; SP A's other two ports would be 10.168.10.9 for the one plugged into fabric 1, and 10.168.11.9 for the one plugged into fabric 2.
The client's multipathing should be what handles all the load balancing and failover. And how the heck are you supposed to put a client with two HBAs into 4 VLANs, anyway? Clients can handle two targets visible from a given initiator just fine.
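To be concrete about what I mean by the host handling it, here's a rough sketch of ALUA-style path selection - not any particular vendor's implementation, just the general idea of preferring the owning SP's ports and failing over to the peer:

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Path:
    target_port: str   # e.g. "SP A port 0"
    optimized: bool    # True if this port is on the SP that owns the LUN
    alive: bool = True

def usable_paths(paths):
    """Round-robin over active/optimized paths; fall back to non-optimized."""
    optimized = [p for p in paths if p.alive and p.optimized]
    fallback  = [p for p in paths if p.alive and not p.optimized]
    chosen = optimized or fallback
    if not chosen:
        raise RuntimeError("no live paths to the LUN")
    return cycle(chosen)

paths = [
    Path("SP A port 0 (fabric 1)", optimized=True),
    Path("SP A port 1 (fabric 2)", optimized=True),
    Path("SP B port 0 (fabric 1)", optimized=False),
    Path("SP B port 1 (fabric 2)", optimized=False),
]

rr = usable_paths(paths)
print(next(rr).target_port)   # I/O spreads across the owning SP's ports first
print(next(rr).target_port)
```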
(no idea on the "oversubscription" thing.)