vSphere datastore for specific data types: NFS or iSCSI

I am looking into improving performance in my vSphere environment. We are using a NetApp appliance, with all VMs stored in datastores that are mounted via NFS.

It was suggested to me that, for some specific workloads (like SQL data or a file server), it may be better for disk I/O performance to use iSCSI for the data disk.

In my example, the boot disk would be a normal VMDK stored in the NFS-attached datastore. The D drive (the disk where the SQL data or the file server data resides) would be an iSCSI-attached volume.

C: - VMDK virtual disk -> NFS datastore -> NetApp
D: - iSCSI -> NetApp

I am also pondering: should the iSCSI connection be initiated at the vSphere level or directly from Windows?

Does anyone have any experience with or thoughts on this?


First and foremost, before you start changing your storage, you should be 100% sure that your bottleneck really is disk/IO related.

If this is the case, an iSCSI LUN can be faster than an NFS share, but only in specific scenarios (small random reads/writes). SQL Server can be one of these scenarios, so if you are sure that your problem is storage performance, you can try iSCSI.

How to configure it depends on your specific needs. For maximum performance, you should use a fully preallocated raw volume, attached directly to the guest OS. This has the added advantage of making the guest configuration "self-contained": migrating that guest to another hypervisor (even one based on a different technology) will not require reconfiguring the iSCSI share (or, at most, the reconfiguration will be very limited).

On the other hand, managing guest-attached block devices is definitely more complex than letting ESXi accomplish the same goal, so you should not use this setup unless you really need it.

I suggest you run some tests, benchmarking each configuration, before going into production.
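
For a first rough comparison you could, for instance, run a small random-read sampler like the one below inside the guest, once against the NFS-backed disk and once against the iSCSI candidate. This is only a sketch: the file name, block size and duration are placeholder assumptions, and purpose-built tools such as fio or DiskSpd will give you more trustworthy numbers.

    import os
    import random
    import time

    # Rough 4 KiB random-read sampler against a pre-created, multi-GB test file.
    # Run it on each disk under test and compare the resulting operations/second.
    TEST_FILE = "testfile.bin"   # assumption: created beforehand on the disk under test
    BLOCK_SIZE = 4096            # small random I/O, the pattern SQL Server tends to generate
    DURATION = 30                # seconds per run

    size = os.path.getsize(TEST_FILE)
    blocks = size // BLOCK_SIZE
    ops = 0

    with open(TEST_FILE, "rb", buffering=0) as f:   # unbuffered file object
        end = time.time() + DURATION
        while time.time() < end:
            f.seek(random.randrange(blocks) * BLOCK_SIZE)
            f.read(BLOCK_SIZE)
            ops += 1

    print(f"{ops / DURATION:.0f} random 4K reads per second")

Keep in mind that the OS file cache will inflate these numbers unless the test file is much larger than RAM, which is one reason the dedicated benchmarking tools (which can bypass the cache) are preferable for the final decision.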


I've not heard that iSCSI is better than NFS for SQL VMs; however, if you do elect to use it, I would create the datastore at the ESX level rather than install an iSCSI initiator in the VM.

One thing you need to be careful about is thin provisioning on the NetApp. The way it handles block devices is different: you can find yourself with an offline LUN if you configure it without preparing for the perfect storm of bad luck. The config you want is:

  • A thin volume the size of the LUN
  • A single thin LUN inside that volume
  • Volume autogrow configured on the volume

The maximum you want to set for your volume autogrow will depend on whether you want to take snapshots of this LUN. If you do, you need to estimate the rate of change and allow the volume to grow large enough to handle the largest delta you expect before you delete the snapshots.
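
As a back-of-the-envelope illustration of that sizing (all figures below are made-up assumptions, not recommendations for your environment):

    # Hypothetical sizing for the volume autogrow maximum.
    lun_size_gb        = 500    # size of the thin LUN presented to ESXi
    daily_change_gb    = 20     # estimated daily rate of change inside the LUN
    snapshot_days_kept = 7      # how long snapshots are retained before deletion
    headroom           = 1.2    # 20% safety margin

    snapshot_delta_gb = daily_change_gb * snapshot_days_kept
    autogrow_max_gb = (lun_size_gb + snapshot_delta_gb) * headroom

    print(f"Set the volume autogrow maximum to roughly {autogrow_max_gb:.0f} GB")
    # -> Set the volume autogrow maximum to roughly 768 GB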

You also want to be very careful about the LUN and igroup settings: make sure their OS type is set to VMware. Also, you want to ensure that VAAI is enabled so that VMware can deallocate zeroed blocks.
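
If you want to confirm from the vSphere side that the LUNs are actually seen as VAAI-capable, something like the pyVmomi sketch below can list each SCSI device's vStorageSupport status. The hostname and credentials are placeholders, and it assumes the pyVmomi package plus lab-grade certificate handling.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()            # lab only: skip certificate checks
    si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter/ESXi address
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in hosts.view:
            print(host.name)
            for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
                # vStorageSupport reflects whether ESXi considers the device VAAI-capable
                print(f"  {lun.canonicalName}: {getattr(lun, 'vStorageSupport', None)}")
    finally:
        Disconnect(si)

Devices that show up as vStorageUnsupported or vStorageUnknown are worth a second look at the LUN/igroup OS type and the VAAI settings on the array.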


First things first: if you want to improve the performance of a VM, you have to know where the bottleneck is. Improving storage performance won't help you if your environment is short on CPU, for example.

I don't think changing your storage protocol from NFS to iSCSI will help you much. There are dozens of other parameters that influence your storage performance more than the protocol.

If you really have performance issues with your storage, google for "VMware NetApp best practices"; that should give you enough information.

Btw: personally, I wouldn't use iSCSI inside a VM. If you want to move the VM to another storage array, you'd have to do it both in vSphere and at the OS level. Using VMDKs makes life a lot easier.