An S2D setup that delivers up to 2M IOPS to a SQL Server FCI
We are researching ultra-fast shared storage for a Microsoft SQL Server Failover Cluster Instance (FCI). As the project currently stands, we want to start at 500K IOPS with 8K blocks and a roughly 70% read / 30% write pattern. We also need the ability to increase performance up to 2M IOPS (same pattern) within a year or so, as the SQL Server workload is expected to grow.
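For reference, a DISKSPD run approximating that starting pattern might look like this (the test file path and size are placeholders, and the thread/queue-depth values would need tuning per host):

    # 8K blocks, 30% writes (70% reads), fully random, software and hardware
    # caches disabled, 60 s run; the target path below is a placeholder on a CSV
    DiskSpd.exe -b8K -w30 -r -d60 -t8 -o32 -Sh -L -c50G C:\ClusterStorage\SQLData\testfile.dat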
For this project we are going to deploy a 4-node Microsoft Storage Spaces Direct (S2D) cluster. As for hardware, we already have 2x Dell R730xd rack servers with 2x E5-2697 and 512 GB RAM each, and we are ready to get 2 more.
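The S2D build itself would follow the standard steps; a minimal sketch with hypothetical node, cluster, and volume names:

    # Validate the four nodes for S2D, then form the cluster without default storage
    Test-Cluster -Node S2D-N1,S2D-N2,S2D-N3,S2D-N4 `
        -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
    New-Cluster -Name S2D-CL01 -Node S2D-N1,S2D-N2,S2D-N3,S2D-N4 -NoStorage

    # Enable S2D: eligible NVMe devices become cache, SATA SSDs become capacity
    Enable-ClusterStorageSpacesDirect

    # Carve a mirrored CSV volume for the SQL FCI databases (size is a placeholder)
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName SQLData `
        -FileSystem CSVFS_ReFS -Size 2TB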
As for storage, Microsoft recommends going with NVMe, or NVMe + SSD, to obtain maximum performance (source). After some research, Samsung SSDs look like a good fit:
https://www.starwindsoftware.com/blog/benchmarking-samsung-nvme-ssd-960-evo-m-2
http://www.storagereview.com/samsung_960_pro_m2_nvme_ssd_review
The setup we are considering is the following: 1x Samsung 960 EVO NVMe + 4x Samsung PM863 SSD per S2D host.
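To verify what S2D will claim on each host, and to pin the M.2 drive as the cache device if auto-selection doesn't pick it, something like the following should work (the model string is an assumption and must match what Get-PhysicalDisk reports):

    # Check bus type, media type, and pool eligibility of the local drives
    Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, CanPool

    # Variant of the enable step above that binds the cache to a specific model;
    # "Samsung SSD 960 EVO 250GB" is a placeholder model string
    Enable-ClusterStorageSpacesDirect -CacheDeviceModel "Samsung SSD 960 EVO 250GB"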
Can an S2D implementation using Samsung 960 EVO NVMe and Samsung PM863 drives deliver 500K IOPS to a SQL FCI?
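One sizing detail worth spelling out, assuming 3-way mirror resiliency (the usual choice for a 4-node S2D cluster): each acknowledged write is committed on three nodes. At a 70/30 split, 500K front-end IOPS is ~350K reads + ~150K writes, so the backend has to absorb ~350K reads + ~450K writes — roughly 800K backend IOPS, or about 200K per node on average.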
EDIT:
a) Didn't you ask something similar the other day? - I did. A new question was posted because the first attempt was off-topic. The subject and body have been changed; the previous question will be deleted.
b) They're consumer drives. - The question is about finding an S2D setup that can house the required 500K IOPS at the start. What setup would you recommend?
c) How are you planning on connecting all of those? I'm unaware of a server out there with 5x M.2 slots - we need to know this. - Only 1x M.2 drive per node is to be used. I have corrected the shared storage setup: 1x Samsung 960 EVO NVMe + 4x Samsung PM863 SATA SSD per S2D host.
d) What kind of IOPS (size and type)? - A read-intensive SQL FCI workload of 4K, 8K, and 64K blocks. Reads range from 70% to 90%, writes from 30% down to 10%.
e) 500K-to-2M is a very wide range of requirement variance - why such a wide range? - The project's performance demands are expected to grow significantly in a short period, so we must be able to run 4x the workload on the same hardware until the end of the first year. A year after that, we will add 4 more hosts to the cluster.
We are a Microsoft shop, so there is no option to go elsewhere than Microsoft SQL Server 2016. Also, as you might assume, the project requires redundancy and extra availability, therefore a SQL Failover Cluster Instance will be deployed on top of S2D.
It's a bad idea to use consumer SSDs in your SDS deployments. VMware vSAN and Microsoft S2D both assume writes are "atomic": once a write is ACK-ed to the host, it must actually be on persistent media. Consumer SSDs don't have any power-loss protection, so they MIGHT lose your data during a power outage. Write endurance is also very different.
https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/
https://blogs.vmware.com/vsphere/2013/12/virtual-san-hardware-guidance-part-1-solid-state-drives.html
http://www.yellow-bricks.com/2013/09/16/frequently-asked-questions-virtual-san-vsan/
I'd suggest sticking with some enterprise-grade NVMe cards.
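A quick way to check what Windows itself reports about a drive's cache and power-loss protection (cmdlet available in Windows Server 2016; consumer M.2 drives will typically report IsPowerProtected : False):

    # Reports IsDeviceCacheEnabled and IsPowerProtected for every physical disk
    Get-PhysicalDisk | Get-StorageAdvancedProperty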