Which storage protocol to use for ESX storage?
What storage connection method should one prefer to use for connecting ESX servers to a shared storage server with 10GbE links?
Specifically, I have 2 servers running VMware ESX and one server for shared storage.
The storage server has 2 x Xeon E5504 2GHz CPUs, 24GB RAM, 12x SSD + 12x SATA drives, and battery-backed RAID. The ESX servers are much the same but with 2 small SAS drives.
All servers are connected with 10GbE adapters.
I have a licence for ESX 3.5, but for testing purposes I am currently running ESXi 4.1. The storage server runs Windows 7, also just for testing.
I am aware of at least 3 methods:
1. iSCSI
2. NFS
3. FCoE
Which one would you recommend to choose and why?
Solution 1:
No 'if's, no 'but's: if you have the option of using 10Gbps FCoE and your configuration has proven stable, then it's the best and only way to go.
It's still quite new, but the efficiencies are overwhelming compared to iSCSI, and NFS is just plain 'different'.
Be aware, however, that you should be right up to date with ESX/ESXi 4.1U1 for the best FCoE performance and stability, and that the list of supported 10Gb NICs/CNAs is quite limited. Other than InfiniBand systems, though, I've never seen shared performance like it. I'm currently moving all of my FC to FCoE, although this won't be complete for over a year due to the volumes involved.
Solution 2:
NFS - file-level storage and the slowest of the three, but it is routable.
FCoE - the best performance, but only if you use it locally in a stub network (it is not routable).
iSCSI - very good performance but adds a bit of complexity (see the sketch below); on the flip side, it is routable.
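To make the complexity comparison a little more concrete, here is a minimal sketch of attaching iSCSI storage to an ESXi host through the vSphere API using pyVmomi. The host name, credentials, the 192.168.10.x address and the vmhba33 adapter name are placeholders for illustration, not values from the question.

```python
# Sketch only: enable the software iSCSI initiator on an ESXi host and point
# it at the storage server. All names/addresses below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab host with a self-signed cert
si = SmartConnect(host="esx1.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first host managed by this connection.
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]
storage = host.configManager.storageSystem

# 1) Enable the software iSCSI initiator.
storage.UpdateSoftwareInternetScsiEnabled(True)

# 2) Add a dynamic-discovery (send targets) address for the storage server.
#    vmhba33 is the usual software iSCSI HBA name on ESX(i) 4.x; verify yours.
target = vim.host.InternetScsiHba.SendTarget(address="192.168.10.20", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba33", targets=[target])

# 3) Rescan so the LUNs appear; a VMFS datastore still has to be created on them.
storage.RescanAllHba()

Disconnect(si)
```

On top of this you still have to format a VMFS datastore on the discovered LUN, which is the extra step the NFS route avoids.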
Solution 3:
If your goal is ease of use, you may want to consider NFS. It has a modest performance overhead (roughly -5% overall throughput and +20% storage-related CPU) compared to FC.
Here's a comparison of NFS vs iSCSI vs FC in 4Gb and 10Gb environments:
http://blogs.netapp.com/virtualstorageguy/2010/01/new-vmware-and-netapp-protocol-performance-report.html
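To illustrate the ease-of-use point: with the same pyVmomi connection boilerplate as in the iSCSI sketch above, mounting an NFS export as a datastore is a single CreateNasDatastore call. The server address, export path and datastore name below are again placeholders.

```python
# Sketch only: mount an NFS export as an ESXi datastore. No initiator,
# no LUN discovery, no VMFS formatting. All names/paths are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx1.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.10.20",      # storage server (placeholder)
    remotePath="/export/vmstore",    # NFS export (placeholder)
    localPath="nfs-vmstore",         # datastore name as it will appear in ESXi
    accessMode="readWrite")
host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```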