What to look for in a switch for LAN/WAN versus an iSCSI SAN?

I'm setting up a VMware ESXi 5 environment with 3 server nodes. Dell recommended two Force10 S60 switches, shared between the iSCSI SAN and the LAN/WAN. The S60 switches are extremely powerful: 1.25 GB of buffer cache and less than 9 µs of latency. But they are very expensive (online price ~$15k per switch; the actual quote was a little less).

I've been told that "by the book" you should have at least two dedicated switches for the SAN and two for the LAN/WAN (so each role has a redundant switch). I know some of the pros and cons of each approach. What I'm wondering is: would it be more cost-effective to separate the SAN from the LAN using less expensive switches?

The answer to this question highlights what I should be looking for in a SAN switch. What should I be looking for in a LAN/WAN switch, compared to the SAN?

Following on from the linked question about the SAN:

  • How is buffer latency measured? When a switch advertises 36 MB of buffer cache, is that shared or per port? In other words, would that work out to roughly 768 KB per port (on a 48-port switch, say), or a full 36 MB per port? (See the quick calculation after this list.)
  • With 3 to 6 servers, how much buffer cache do you really need?
  • What else should I be looking at?
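
(For clarity, this is the back-of-envelope arithmetic behind the 768 KB figure above; the 48-port count is only an assumption for the sake of the example.)

```python
# Back-of-envelope: what 36 MB of buffer cache works out to per port
# if it is shared across all ports. The 48-port count is assumed
# purely for illustration.
total_buffer_mb = 36
port_count = 48

shared_per_port_kb = total_buffer_mb * 1024 / port_count
print(f"If shared:   ~{shared_per_port_kb:.0f} KB per port")  # ~768 KB
print(f"If per port: {total_buffer_mb} MB per port")
```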

Our application will make heavy use of HTML5 WebSockets (a high number of persistent connections). The amount of data being sent is small, and data sent between client and server isn't broadcast to other clients (it's not a chat/IM service). We will also be doing some database reporting (CSV export, sums, some joins).

We are a small business on a budget; we could probably spend no more than $20k total on switches (whether 2 or 4).


As a best practice, yes, your SAN and LAN ought to be physically separate.

That said, like all things, it comes down to what problems you're trying to solve, your performance needs, your sensitivity to transient storage slowness (if you experience port or backplane contention), and the amount of money you have to throw at the project.

I know many businesses that run converged SAN and data networks and have great luck with it. I know just as many that maintain two physically separate networks.

What's best for your situation depends on the above factors.


Best practice is to run them separately; however, in doing so you lose the benefits of a converged network. That matters most when you have a large environment and oodles of 10 Gb ports.

However, your environment is a very small one, and I think Dell is trying to oversell you on network hardware and on their own iSCSI hardware.

You can purchase a switch with multiple heads that is functionally equivalent to having two switches. You can also look at Fibre Channel instead of iSCSI, and maybe compare NFS and InfiniBand as well. Another option is I/O virtualization over InfiniBand, e.g. Xsigo.

On the NAS/SAN side, I would not be so tied to Dell; I might instead go with a best-of-breed product line, including the likes of NetApp and other competitors.

Questions I would ask:

  • How easy is it for me to find talent for this configuration?
  • How close to industry standard is this hardware?
  • What are the out-year costs of this solution going to be (TCO)?
  • How expandable is this solution?
  • Does this solution miss any nice-to-haves?
  • Is the vendor trying to oversell me on a specific solution?
  • Do I adequately understand the problem space, and do I know a reasonable number of alternative solutions?
  • Can I use one vendor's quote to get price concessions from another vendor?
  • How remotely manageable and monitorable is this solution?
  • How well does the entire stack integrate?
  • What is the cost per minute of an outage, and how does that compare to the cost of extra hardware?
  • Can I mitigate risks another way?
  • Might I be better off going with a cloud stack from a vendor in a regulated environment, trading higher operating costs for less capital investment?
  • Where is my application-aware security?
  • How easy is it to secure this infrastructure?
  • Am I attempting to optimize the solution prematurely?
  • Have I performed sufficient performance analysis and benchmarking to know what my true performance requirements are?
  • How does this system fail over, and to what (HA and vMotion, among others)?
  • Do I have single points of failure?
  • Have I received quotes for both integrated stacks and best-of-breed stacks, from at least 3 vendors apiece (6 vendors total)?
  • Can I go with a different model altogether, perhaps a blade enclosure with blades, or virtualized I/O over a higher-speed network (Xsigo)?
  • Can I use virtual switches (e.g. the Cisco Nexus 1000V and its competitors) instead of physical switches?

One other thing I would add: several vendors now sell pre-engineered solutions, such as the Cisco/VMware/NetApp FlexPod partnership, or a competing fully integrated single-vendor offering such as HP's VirtualSystem. I'd make sure these vendors know what your goals are and get them working with their own virtualization specialists to create a solution that meets your requirements.

You can usually get demo gear from these vendors without buying anything (for a limited time), then make a selection based on whichever best meets your needs. Head-to-head competition is always a win :)


Use Cisco 3750-X switches: buy two of them and VLAN the ports off for what you need. Two 48-port switches should do you fine. I just set up our iSCSI environment and this is how we did it; it works fine. The SAN plugs into the switches and the servers plug into the switches. I think we needed 6 cables for each server because of all the heartbeats, but it works mint! (Roughly along the lines of the sketch below.)
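
To make "VLAN the ports off" concrete, here's a rough IOS-style sketch of the kind of split I mean. The VLAN IDs, names, and port ranges are made up for illustration only, so adapt them to your own layout and check against the 3750-X documentation:

```
! Rough illustration only - VLAN IDs and port ranges are hypothetical.
! One VLAN for iSCSI traffic, one for general LAN traffic.
vlan 100
 name iSCSI
vlan 200
 name LAN
!
! Ports 1-24: iSCSI (SAN controllers and the ESXi iSCSI uplinks)
interface range GigabitEthernet1/0/1 - 24
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
!
! Ports 25-48: general LAN/WAN traffic (VM networks, management, etc.)
interface range GigabitEthernet1/0/25 - 48
 switchport mode access
 switchport access vlan 200
```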