Rough estimate of the speed advantage of SAN-via-Fibre-Channel versus SAN-via-iSCSI when using VMware vSphere

There are a lot of factors that determine perceived performance here. One tweak you might consider is setting up jumbo frames. Scott Lowe has a recent blog post here that shows some of what he did to achieve this.
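If you do go the jumbo-frames route, it's worth verifying that 9000-byte frames actually make it end-to-end without fragmentation, since a single switch port left at MTU 1500 will quietly break it. Here's a minimal sketch, assuming a Linux box on the storage network and a hypothetical target address (on ESXi itself you'd do the equivalent with `vmkping -d -s 8972`):

```python
import subprocess

# Hypothetical iSCSI target address; replace with your storage array's IP.
TARGET = "192.168.10.50"

# 9000-byte MTU minus 20 bytes IP header and 8 bytes ICMP header = 8972-byte payload.
PAYLOAD = 9000 - 20 - 8

# -M do forbids fragmentation; if any hop is still at MTU 1500, the ping fails.
result = subprocess.run(
    ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", TARGET],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print("Jumbo frames pass end-to-end without fragmentation.")
else:
    print("Jumbo frames are NOT getting through:")
    print(result.stdout or result.stderr)
```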

You mention that the guests will be running low CPU load - those are always great candidates for virtualization - but the difference between Fibre Channel and iSCSI doesn't really come into play there.

If your VM guests are going to be running storage-intensive operations, then you have to consider that the speed of transferring read/write operations from the VM host to the storage array may become your bottleneck.

Since a typical iSCSI transfer rate is 1 Gbps (over Ethernet), and FC is usually around 2-4 Gbps (depending on how much cash you're willing to spend), you could say that the raw transfer speed of FC is roughly two to four times as fast.
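To put rough numbers on that, here's a back-of-the-envelope sketch of the theoretical line rates. It ignores protocol and encoding overhead, which shaves another 10-20% off in practice, so treat these as upper bounds:

```python
# Back-of-the-envelope line-rate comparison (ignores protocol overhead).
links = {
    "1 Gbps iSCSI over Ethernet": 1,
    "2 Gbps Fibre Channel": 2,
    "4 Gbps Fibre Channel": 4,
    "10 GbE iSCSI (newer switches)": 10,
}

for name, gbps in links.items():
    # 1 Gbps = 1000 Mbit/s; divide by 8 to get MB/s.
    mb_per_s = gbps * 1000 / 8
    print(f"{name:32s} ~{mb_per_s:6.0f} MB/s theoretical")
```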

There are also the newer 10 GbE switches, but your PowerVault and PowerConnect don't support that yet.

However, that doesn't mean the machines will actually work faster: if they're running applications with low I/O, they may well perform at the same speed either way.

The debate over which is better is never-ending; it will basically come down to your own evaluation and results.

We have multiple deployments of FC-based mini-clouds and iSCSI-based mini-clouds, and they both work pretty well. We're finding that the bottleneck is at the storage array level, not in the iSCSI traffic over 1 Gb Ethernet.


You are more likely to be bottlenecking on the number of spindles than the speed of your transport.

That is, yes, the raw speed of FC is faster than iSCSI, but if you are (hypothetically) trying to run 200 VMs off 6 spindles (physical disks), you're going to see worse performance than if you run those 200 VMs off 24 spindles over iSCSI. In our nearly-idle lab environment, we're averaging about 2 NFS ops per VM (roughly 240 ops across 117 VMs), which might give some notion of how low the I/O can be.
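To illustrate why the spindles usually lose that race, here's a rough sketch comparing aggregate disk IOPS against what each transport could carry. The per-spindle IOPS and I/O size are ballpark assumptions for illustration, not measurements from the environments above:

```python
# Rough comparison of where the bottleneck lands: spindles vs. transport.
IOPS_PER_SPINDLE = 150          # assumed average for a 10k RPM disk
IO_SIZE_KB = 8                  # assumed typical VM I/O size

def spindle_iops(spindles):
    """Aggregate IOPS the disks themselves can deliver."""
    return spindles * IOPS_PER_SPINDLE

def transport_iops(gbps):
    """How many 8 KB I/Os the link could carry at full line rate."""
    return gbps * 1000 * 1000 / 8 / IO_SIZE_KB   # Gbps -> KB/s -> IOPS

for spindles in (6, 24):
    print(f"{spindles:2d} spindles   -> ~{spindle_iops(spindles):7,.0f} IOPS from disk")

for gbps, name in ((1, "1 Gbps iSCSI"), (4, "4 Gbps FC")):
    print(f"{name:12s} -> ~{transport_iops(gbps):7,.0f} IOPS of 8 KB transfers")
```

With these assumptions, even a 1 Gbps link can carry an order of magnitude more small I/Os than a 6- or 24-spindle array can serve, which is why the transport rarely shows up as the limit.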

I don't think you'll see much difference based on the transport unless you know you have very high contiguous I/O (a heavy instrument data log stream? video archiving? I honestly don't know what real-world scenarios look like this).

I really don't think you'd notice the transport unless the disks can deliver dramatically more IOPS than your load demands. I would make the decision on other criteria (ease of management, cost, etc.).

We went with NFS on a NetApp the last time we added storage.


It's a very noticeable change. Though to be truthful, we were going from Linux-server-based iSCSI ("fake" iSCSI) to fibre, i.e. from a testing environment to production, when my last company was rolling out VMware-based shared hosting. Our VMware rep stated that fibre has much less overhead when multiple VMs on a single ESX host need access to shared storage. I noticed the general responsiveness of a Win2k3 VM instance roughly doubled, and disk I/O, which I tested using HD Tune inside the VM, was faster than our Dell 2850's standard I/O (3 x 73 GB in RAID 5 on a PERC 4, if memory serves). Granted, we were only running maybe 5 or so VMs on each ESX host with low usage, as we were still being trained up on it.

Your VMware rep should have plenty of documentation on Fibre Channel vs. iSCSI, including some overall benchmarks, or at least real-world implementation stories/comparisons. Our rep sure did.


I know the issue has been resolved, but I suggest you have a look at this article about FC vs. iSCSI.