Using a high-end NAS as vSphere/Hyper-V storage
I have a project underway to replace the storage our virtual farms are using. We replaced the hosts last year, but due to budget constraints we couldn't afford to replace the storage until now.
A high-end NAS from a well-respected company has come to my attention. It has 24 drive bays, 16 GB of RAM, an 8-core Xeon CPU, and two 10GBASE-T network interfaces.
I can outfit two of these entirely with 960 GB Samsung enterprise SSDs and still pay less than I would for a single SAN from the likes of Dell outfitted with less spinning-rust capacity. I feel I can't ignore this.
So I guess I have two questions:
1) Could the NAS cope with the workload? The farm has three virtual hosts and holds pretty much all of the business's servers, including user file storage, DCs, and a few SQL databases.
2) This thing talks iSCSI and NFS. It strikes me as a fairly bad idea to present it as block storage to the virtual hosts when in fact it isn't. Layering two file systems on it (VMFS on top of ext3) seems wasteful, whereas with NFS the VMDKs would be stored directly on the main file system. Would I be better off using NFS or iSCSI?
Solution 1:
So I guess I have two questions:
1) Could the NAS cope with the workload? The farm has three virtual hosts and holds pretty much all of the business's servers, including user file storage, DCs, and a few SQL databases.
A: It's absolutely OK for a NAS to handle your workload! Since Windows Server 2012, Microsoft has actually preferred SMB3, which is a file protocol, over iSCSI/FC, which are block protocols. The problem is that very few SAN/NAS vendors have implemented the SMB3 stack properly: most have issues with SMB Multichannel and SMB Direct (RDMA), and those two features are the major driving force behind adopting SMB3 in production. For example, NetApp...
https://library.netapp.com/ecmdocs/ECMP1196891/html/GUID-3E1361E4-4170-4992-85B2-FEA71C06645F.html
Data ONTAP does not support the following SMB 3.0 functionality:
- SMB Multichannel
- SMB Direct
- SMB Directory Leasing
- SMB Encryption
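Whether this particular NAS copes is ultimately an IOPS question more than a protocol one, so measure your current farm first. As a rough back-of-envelope sketch (every number below is an illustrative assumption, not a spec of your NAS or your workload):

```python
# Rough capacity-planning sketch. Every number here is an assumption
# you must replace with your own measurements (e.g. from perfmon or
# esxtop on the current farm); none of them come from the NAS vendor.

SSD_COUNT = 24              # assumed: all bays populated
SSD_READ_IOPS = 90_000      # assumed per-drive 4K random read (enterprise SATA class)
SSD_WRITE_IOPS = 20_000     # assumed per-drive 4K random write, steady state
RAID_WRITE_PENALTY = 2      # assumed RAID 10; use 4 for RAID 5, 6 for RAID 6
READ_RATIO = 0.7            # assumed 70/30 read/write mix, typical mixed farm

raw_read = SSD_COUNT * SSD_READ_IOPS
raw_write = SSD_COUNT * SSD_WRITE_IOPS / RAID_WRITE_PENALTY

# Blended ceiling for the assumed mix (weighted harmonic mean of the
# read and write ceilings). The controller, RAM and the two 10GbE
# links will cap real throughput long before the SSDs do.
effective = 1 / (READ_RATIO / raw_read + (1 - READ_RATIO) / raw_write)
print(f"read ceiling:    {raw_read:,.0f} IOPS")
print(f"write ceiling:   {raw_write:,.0f} IOPS")
print(f"blended ceiling: {effective:,.0f} IOPS (before controller/network limits)")
```

Compare the blended ceiling against peak IOPS measured on the existing farm; with 24 enterprise SSDs the drives are rarely the bottleneck, the NAS controller and the two 10GbE links are.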
2) This thing talks iSCSI and NFS. It strikes me as a fairly bad idea to present it as block storage to the virtual hosts when in fact it isn't. Layering two file systems on it (VMFS on top of ext3) seems wasteful, whereas with NFS the VMDKs would be stored directly on the main file system. Would I be better off using NFS or iSCSI?
A: It's absolutely OK to use iSCSI with VMware, and Hyper-V is OK-ish with it too, but NFS is VMware-only: you can't run Hyper-V from NFS (or SQL Server, if you care; that one isn't as bad but has its own limitations).
https://www.starwindsoftware.com/blog/hyper-v-vms-on-nfs-share-why-hasnt-anyone-thought-of-that-earlier-they-did-in-fact-2
http://windowsitpro.com/hyper-v/hyper-v-vms-nfs
https://www.brentozar.com/archive/2012/01/sql-server-databases-on-network-shares-nas/
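On the VMware side, attaching an NFS export as a datastore is a single API call. Here's a minimal sketch using pyVmomi; the hostnames, credentials, and export path are placeholders, not details from the question:

```python
# Sketch: attach an NFS export to an ESXi host as a datastore via pyVmomi.
# All connection details below are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.local", # placeholder vCenter/ESXi address
                  user="root", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.rootFolder.childEntity[0].hostFolder \
                  .childEntity[0].host[0]       # first host; adjust the lookup as needed

    spec = vim.host.NasVolume.Specification(
        remoteHost="nas01.example.local",       # placeholder NAS address
        remotePath="/volume1/vmware",           # placeholder NFS export
        localPath="nas01-vmware",               # datastore name as seen by ESXi
        accessMode="readWrite",
    )
    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print("Mounted datastore:", ds.name)
finally:
    Disconnect(si)
```

The equivalent iSCSI setup means enabling the software adapter, adding the target, rescanning HBAs, and then creating a VMFS datastore on the discovered LUN: four steps instead of one, which hints at the manageability point below.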
Back to iSCSI vs. NFS. I'd say both are essentially identical from a performance point of view (unless you do iSER, which doesn't work well with ESXi 6.5), but NFS is much easier to manage! A quick probe sketch follows after these links.
http://www.unadulteratednerdery.com/2014/01/15/storage-for-vmware-setting-up-iscsi-vs-nfs-part-1/
http://community.netapp.com/t5/Network-Storage-Protocols-Discussions/NFS-or-iSCSI-for-ESXi-5-5-and-or-6/td-p/114345
My bet is on NFS here!
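If you want a quick sanity check of whichever datastore you mount before committing VMs to it, something like the following gives a rough large-block write figure. It's a smoke test under an assumed placeholder path, not a benchmark; use fio from a test VM for real numbers:

```python
# Very rough sequential-write probe against a mounted datastore path.
# This is a smoke test, not a benchmark: it ignores caching effects,
# queue depth and read workloads. Use fio for anything you'll act on.
import os
import time

TARGET = "/vmfs/volumes/nas01-vmware/probe.bin"  # placeholder datastore path
BLOCK = 1024 * 1024                              # 1 MiB writes
TOTAL = 2 * 1024 * 1024 * 1024                   # 2 GiB test file

buf = os.urandom(BLOCK)
start = time.monotonic()
fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
try:
    for _ in range(TOTAL // BLOCK):
        os.write(fd, buf)
    os.fsync(fd)                                 # flush so we time real writes
finally:
    os.close(fd)
elapsed = time.monotonic() - start
os.unlink(TARGET)
print(f"{TOTAL / elapsed / 1e6:.0f} MB/s sequential write")
```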
Solution 2:
Have you ever thought of a virtual SAN approach to building your storage? For the most part, it would be a hyper-converged setup where all the nodes use their local storage and replicate data between each other.
For this case, I can recommend the two most obvious storage players:
HPE StoreVirtual: https://www.hpe.com/us/en/storage/storevirtual.html
StarWind vSAN: https://www.starwindsoftware.com/starwind-virtual-san-free
We've had a true love story with the latter, btw :) Just two all-flash nodes, 10GbE + 1GbE networking, and a dozen VMs running on top. What I can say is that we eventually got redundancy at the node level and a high I/O rate, because all the nodes in the cluster process traffic simultaneously: a real active-active HCI setup.
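To make the write path concrete, here is a toy sketch of what synchronous two-node mirroring means: a guest write is acknowledged only once both nodes hold the data. This is purely illustrative and not how either product above is actually implemented:

```python
# Toy illustration of a synchronous mirrored write across two nodes.
# Real HCI products do this over RDMA/TCP with journaling and resync;
# this just shows why an ack means the data already exists twice.
from concurrent.futures import ThreadPoolExecutor

class Node:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}   # stand-in for local SSDs

    def write(self, lba: int, data: bytes) -> bool:
        self.blocks[lba] = data              # pretend this is a disk write
        return True

def mirrored_write(nodes: list[Node], lba: int, data: bytes) -> None:
    # Send the write to every node in parallel and wait for all acks;
    # only then is the write acknowledged back to the VM.
    with ThreadPoolExecutor(len(nodes)) as pool:
        acks = list(pool.map(lambda n: n.write(lba, data), nodes))
    if not all(acks):
        raise IOError("replica write failed; VM write must not be acked")

cluster = [Node("node-a"), Node("node-b")]
mirrored_write(cluster, lba=42, data=b"\x00" * 4096)
print("acked: block 42 now exists on", [n.name for n in cluster])
```

Reads can then be served from whichever node is local to the VM, which is where the active-active I/O gain comes from.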
Solution 3:
DISCLAIMER: If you pay for VMware support, you probably don't want to use anything that's not VMware certified.
Without knowing your system itself: we have measured 10 Gb iSCSI vs. 16 Gb Fibre Channel vs. 10 Gb NFS on our own "cheap" storage systems. So far, what we've seen is that Fibre Channel is slightly faster than iSCSI, and both outperform NFS.
So if I had to decide, I'd go with iSCSI.
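If you want to reproduce that kind of comparison on your own hardware, fio driven by a small wrapper is the usual approach. A sketch, assuming fio is installed and that the mount points below (placeholders) are iSCSI- and NFS-backed test locations; never point it at anything holding data:

```python
# Sketch: run the same fio job against an iSCSI-backed file and an
# NFS-backed file, so the two protocols are compared on equal terms.
# Paths are placeholders; fio must be installed on the test machine.
import json
import subprocess

def fio_randread(path: str) -> float:
    """Return 4K random-read IOPS for a 60 s direct-I/O run."""
    out = subprocess.run(
        ["fio", "--name=probe", f"--filename={path}", "--size=4G",
         "--rw=randread", "--bs=4k", "--ioengine=libaio", "--iodepth=32",
         "--direct=1", "--runtime=60", "--time_based",
         "--output-format=json"],
        capture_output=True, check=True, text=True,
    ).stdout
    return json.loads(out)["jobs"][0]["read"]["iops"]

for label, path in [("iSCSI", "/mnt/iscsi/probe.bin"),   # placeholder mounts
                    ("NFS",   "/mnt/nfs/probe.bin")]:
    print(f"{label}: {fio_randread(path):,.0f} IOPS")
```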