Slow Read/Write Performance over iSCSI SAN

This is a new setup of ESXi 4.0 running VMs off of a Cybernetics miSAN D iSCSI SAN.

Doing a high-data read test on a VM, it took 8 minutes vs 1.5 minutes for the same VM located on a slower VMware Server 1.0 host with the VMs on local disk. Watching the read speeds from the SAN, I'm seeing just over 3 MB/s max read, and Disk Usage on the VM matches at just over 3 MB/s... horribly slow.
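For anyone who wants to reproduce the kind of numbers I'm quoting, timing a big sequential read inside the guest along these lines gives an equivalent figure (the path is a placeholder, and the file needs to be much larger than the guest's RAM so the page cache doesn't flatter the result):

    # Rough sketch of a sequential-read timing test (Python, run inside the VM).
    # TEST_FILE is a placeholder - point it at any file much larger than guest RAM.
    import time

    TEST_FILE = "/tmp/bigfile.bin"   # placeholder path
    CHUNK = 1024 * 1024              # read 1 MiB at a time

    total_bytes = 0
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total_bytes += len(data)
    elapsed = time.time() - start

    print("Read %.0f MiB in %.1f s -> %.1f MB/s"
          % (total_bytes / 2.0**20, elapsed, total_bytes / 2.0**20 / elapsed))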

The server and SAN are both connected to the same gigabit switch. I have followed this guide

virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

to get multipathing set up properly, but I'm still not getting good performance with my VMs. I know the SAN and the network should be able to handle over 100 MB/s, but I'm just not getting it. I have two Gb NICs on the SAN multipathed to two Gb NICs on the ESXi host, one NIC per VMkernel port. Is there something else I can check or do to improve my speed? Thanks in advance for any tips.


That SAN hardware is certified for VMware, so get the vendor's support to look into it. A common cause of bad performance is overloading the SAN's interfaces: if multiple hosts connect to the same SAN, not all of them can be served at maximum speed.

Also, in your setup local disk will always be faster than the SAN: even a SATA disk has a 3 Gb/s interface, while each of your iSCSI paths is limited to 1 Gb/s Ethernet, so the SAN will never match the speed of your local disks. You are probably also using Ethernet instead of Fibre Channel, which does not help performance either.
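To put rough numbers on those interface limits (theoretical link maxima only - protocol overhead and what the spindles can actually deliver both lower the real figures, and the 1 GbE line is the ceiling that matters for the iSCSI traffic):

    # Back-of-the-envelope link bandwidth comparison, theoretical maxima only.
    def gbit_to_mb_per_s(gbit):
        # 1 Gb/s = 1000 Mb/s; divide by 8 to convert to megabytes per second
        return gbit * 1000 / 8.0

    print("SATA 3 Gb/s link : ~%.0f MB/s" % gbit_to_mb_per_s(3))   # ~375 MB/s
    print("1 GbE iSCSI path : ~%.0f MB/s" % gbit_to_mb_per_s(1))   # ~125 MB/s, more like ~110 in practice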

You use a SAN not only for speed, but to have a centrally managed place where you can put all your important data and make sure a suitable RAID level is applied. There are also features like replication, which is one of the advantages of having a SAN.


That setup should be able to deliver reasonable performance, and from what I can gather that array can sustain around 60-70 megabytes per second even for small-block random IO. I've no experience with them, but the spec indicates that it should easily be able to handle your requirements, and the few reviews that searches throw up back that up.

Anyway, if I were you I'd step back a little first. Get rid of multipathing (initially) and make sure you can get a single path (on the VMware side) to sustain respectable performance. Assuming you have an 8-drive unit, fully populated with 10k SAS drives, with one hot spare and a 7-drive RAID 5 pack, it should easily deliver >100 MB/sec sequential read or write over a single interface on a good, dedicated Gbit LAN, even accounting for all the IP/TCP and iSCSI overhead. Do simple bulk tests of large file copies (something significantly larger than the write cache on the array) to or from the SAN to check that you are seeing that; there's a rough timing sketch further down. If you are reading from and writing to the SAN volume at the same time then performance will be no more than half that, BTW. If not then you will want to look at all the usual suspects:

  • For starters make sure the SAN's cache is configured correctly
  • Make sure all the drives are healthy - i.e. you're not fighting a RAID rebuild
  • Make sure the switch is healthy and not busy with other stuff - ideally you should isolate your SAN traffic onto its own switch, if you can't do that put it on its own VLAN.
  • Definitely don't put it on a cheap switch that is very busy with other stuff.
  • Check duplex and speed settings on all the ports (ESX, switch & SAN) - see the quick check just after this list
  • Avoid messing with Jumbo Frames on ESX until you know everything else is working
  • Definitely enable hardware flow control on the switch
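
On the duplex/speed point, a quick way to eyeball the ESX side is to parse the output of esxcfg-nics -l. The sketch below assumes the usual column layout from memory, so sanity-check it against your own output (or just read the command's output by eye); the switch and SAN ports still need checking from their own management interfaces:

    # Sketch: flag any ESX physical NIC not linked at 1000 Mbps / full duplex.
    # Assumes "esxcfg-nics -l" columns: Name, PCI, Driver, Link, Speed, Duplex, ...
    import subprocess

    out = subprocess.Popen(["esxcfg-nics", "-l"],
                           stdout=subprocess.PIPE).communicate()[0]
    if not isinstance(out, str):                # bytes under Python 3
        out = out.decode("utf-8", "replace")

    for line in out.splitlines():
        if not line.startswith("vmnic"):
            continue                            # skip the header line
        cols = line.split()
        name, link, speed, duplex = cols[0], cols[3], cols[4], cols[5]
        if link != "Up" or speed != "1000Mbps" or duplex != "Full":
            print("Check %s: link=%s speed=%s duplex=%s" % (name, link, speed, duplex))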

When you are testing, make sure neither the ESX host nor the SAN is busy with anything else.
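
For the bulk copy test mentioned above, something this crude is plenty - write a file comfortably bigger than the array's write cache, fsync it, and time it (the datastore path and size are placeholders):

    # Sketch of a crude sequential-write timing test against a SAN-backed volume.
    # TARGET and SIZE_MB are placeholders; pick a size well beyond the array's
    # write cache (and beyond host RAM if you plan to read the file back).
    import os
    import time

    TARGET = "/vmfs/volumes/san_datastore/bulktest.bin"   # hypothetical path
    SIZE_MB = 4096
    CHUNK = b"\0" * (1024 * 1024)    # 1 MiB of zeroes

    start = time.time()
    f = open(TARGET, "wb")
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())             # make sure the data really left the host
    f.close()
    elapsed = time.time() - start

    print("Wrote %d MiB in %.1f s -> %.1f MB/s" % (SIZE_MB, elapsed, SIZE_MB / elapsed))

Reading the file back the same way gives you the read figure, with the caveat that anything still cached in guest RAM will inflate it.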

Once you are successfully getting >100 MB/sec of sequential traffic on a single uplink, then you can consider seeing if multipathing makes a difference. With iSCSI on ESX 4 it can, but it's unlikely unless the storage array correctly supports it in conjunction with ESX 4; I would look to the array vendor for guidance on that.
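
If you do go back to multipathing, it's also worth confirming which path selection policy the LUN actually ended up with - if I remember the 4.x esxcli syntax correctly it's "esxcli nmp device list", and round robin shows up as VMW_PSP_RR. A rough filter (command name and output labels assumed from memory, so treat it as a sketch):

    # Sketch: list devices whose path selection policy is not round robin.
    # Assumes ESX 4.x "esxcli nmp device list" output, where each device block
    # contains a "Path Selection Policy: VMW_PSP_xxx" line.
    import subprocess

    out = subprocess.Popen(["esxcli", "nmp", "device", "list"],
                           stdout=subprocess.PIPE).communicate()[0]
    if not isinstance(out, str):
        out = out.decode("utf-8", "replace")

    device = None
    for raw in out.splitlines():
        line = raw.strip()
        if line.startswith(("naa.", "eui.", "t10.", "mpx.")):
            device = line                                   # device identifier line
        elif line.startswith("Path Selection Policy:"):
            policy = line.split(":", 1)[1].strip()
            if policy != "VMW_PSP_RR":
                print("%s is using %s, not round robin" % (device, policy))

Whether round robin is actually appropriate for this array is exactly the sort of thing to confirm with the vendor first.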