Recommended switch for an iSCSI implementation
Solution 1:
I agree completely with ynguldyn's answer: for the most part, any modern switch intended for use in a server room or data center should be more than good enough for your needs, and keeping things consistent in your environment is likely to matter more from a support and manageability perspective.
That said, if you really want to get the most out of your iSCSI setup, use switches that have:
Sufficient per-port buffer memory. Ideally something >512 KB per port, but there is a trade-off here: some switches use large buffers to mask poor switching speed, so you need to look at more than this one number. Too little buffer memory leads to packet loss under heavy load, forcing the TCP layer to retransmit, which slows everything down dramatically.
Sufficient per-port processing capability. This can be hard to establish; the best metric to look for is switching speed. A switch with a 100 microsecond switching speed can only handle 10k packets/sec, nowhere near GigE line rate (roughly 81k packets/sec with full-size 1500-byte frames); one with a 3 microsecond switching speed can in theory handle around 330k packets/sec, which is fine. Anything below 12 microseconds is likely to be good enough. Faster is better, but prices go up quite dramatically as that number heads for low single digits. (The arithmetic is worked through in the first sketch after this list.)
Support for hardware flow control (802.3x). This is useless if your server NICs and array don't also support it, but if they do, it lets your iSCSI network handle congestion much more efficiently at layer 2 rather than relying on higher-level mechanisms such as TCP's congestion-avoidance algorithms, which are significantly less efficient. That said, it's hard to find a proper switch that doesn't support it today.
Support for jumbo frames. Again, this is only going to be beneficial if your iSCSI array, server hardware, and OS also support jumbo frames. At the most basic level, jumbo frames decrease protocol overhead and can push throughput up by 10-20%, though those gains depend heavily on traffic patterns. For extended high-bandwidth data transfers, 9k jumbo frames will also reduce the CPU overhead on your array, servers, and switch by up to 80%; this may or may not be significant in your environment, since the initial CPU overhead may already be relatively low. Low-end switches sometimes claim jumbo frame support but don't support 9k jumbo frames, which is the generally agreed optimal size for GigE, so check that first. If your array doesn't support jumbo frames there's no need to worry about this, obviously. (The second sketch after this list shows where the overhead savings come from.)
High-bandwidth switching and stacking capability. For GigE you should be aiming for >1 Gbps per port, ideally 2 Gbps, to handle full-duplex traffic at line speed across all ports: a 24-port switch should be able to switch 48 Gbps internally and be stackable or uplinkable at a significant fraction of that if you are using multiple switches. For some iSCSI architectures (e.g. HP LeftHand and Dell EqualLogic) you need to support very high-bandwidth traffic between all ports on all arrays, and the aggregate switching speed becomes very important. For switches with mixed 1GigE and 10GigE ports, adjust accordingly: the total switching bandwidth should cover all ports running at full speed in full-duplex mode (the third sketch after this list does this arithmetic).
Spanning Tree. You want to be able to disable it completely if your iSCSI environment is simple enough and isolated from everything else, or have it support Rapid Spanning Tree / PortFast / edge ports so that you can selectively disable full spanning tree behaviour on specific ports.
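For anyone who wants to sanity-check the switching-speed numbers above, here is the arithmetic as a small Python sketch. It assumes the switch forwards one packet per switching-speed interval, which is a simplification (real switches pipeline packets), so treat it as a rough lower bound rather than a model of any particular device:

```python
# Back-of-the-envelope switching-speed arithmetic (illustrative only).

GIG_E_BPS = 1_000_000_000  # GigE line rate in bits/sec

def max_pps(switching_latency_us):
    """Packets/sec if the switch handles one packet per latency interval."""
    return 1_000_000 / switching_latency_us

def line_rate_pps(frame_bytes):
    """Packets/sec needed to saturate GigE for a given frame size.
    Adds 20 bytes of preamble + inter-frame gap per frame on the wire."""
    return GIG_E_BPS / ((frame_bytes + 20) * 8)

print(f"100 us switch: {max_pps(100):>10,.0f} pps")  # ~10,000
print(f" 12 us switch: {max_pps(12):>10,.0f} pps")   # ~83,333
print(f"  3 us switch: {max_pps(3):>10,.0f} pps")    # ~333,333
# 1518 bytes = 1500 MTU + 14 Ethernet header + 4 FCS
print(f"GigE, full-size frames: {line_rate_pps(1518):,.0f} pps needed")  # ~81,274
```

So a 12 microsecond switch (~83k pps) just covers GigE at full frame size, which is where the rule of thumb above comes from; smaller frames need far more packets/sec.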
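The jumbo frame point is easiest to see with the per-frame overhead worked out. This sketch assumes plain TCP/IPv4 over Ethernet; the exact byte counts for your encapsulation may differ slightly:

```python
# Jumbo frame overhead arithmetic: 14 B Ethernet header + 4 B FCS +
# 20 B preamble/inter-frame gap on the wire, 20 B IP + 20 B TCP inside.

ETH_OVERHEAD = 14 + 4 + 20
IP_TCP = 20 + 20

def efficiency(mtu):
    """Fraction of wire bandwidth carrying actual payload."""
    return (mtu - IP_TCP) / (mtu + ETH_OVERHEAD)

def frames_per_sec(mtu, line_bps=1_000_000_000):
    """Frames/sec at line rate for a given MTU."""
    return line_bps / ((mtu + ETH_OVERHEAD) * 8)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} payload efficiency, "
          f"{frames_per_sec(mtu):,.0f} frames/sec at GigE line rate")
# MTU 1500: ~94.9% efficiency, ~81,274 frames/sec
# MTU 9000: ~99.1% efficiency, ~13,831 frames/sec
```

Note the raw header-overhead gain is modest; the bigger effect is that 9k frames cut the frame rate roughly six-fold, and per-frame processing on the array, servers, and switch is where most of the CPU saving comes from.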
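And a trivial check for the non-blocking fabric requirement; the mixed port count in the second call is just an example configuration:

```python
# Fabric bandwidth needed for all ports at line rate in full duplex.

def required_fabric_gbps(port_counts):
    """port_counts: {port_speed_gbps: number_of_ports}.
    Full duplex doubles the per-port requirement."""
    return sum(speed * count * 2 for speed, count in port_counts.items())

print(required_fabric_gbps({1: 24}))         # 24x GigE            -> 48 Gbps
print(required_fabric_gbps({1: 24, 10: 4}))  # 24x GigE + 4x 10GigE -> 128 Gbps
```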
Solution 2:
GigE is an old and stable technology, and the processing power of modern switches handles it very easily, especially when it's just one target and a handful of initiators. You should expect any decent switch ($20 little-boxes-that-developers-hide-under-their-desks-to-annoy-sysadmins excluded, of course) to have no timing or performance issues in a SAN environment. Relevant feature sets are also pretty much the same across all of them: jumbo frames, flow control, VLANs, and everything else you might need.
Instead, you should focus on budget, existing vendor relationships, installed hardware, and in-house expertise: get what you can afford and what you know best, and stick to the brand you already use (two reasons: fewer manuals to read means deeper knowledge of what you have, and you'll avoid interoperability issues). Cisco, ProCurve, Nortel, or high-end Netgears should all be good.