Is there a reason to use a Storage Pool instead of creating a RAID-5 Volume?

It's a bad idea to use parity Storage Spaces because:

1) Single parity is dangerous: every time a disk dies and a rebuild starts, heavy load is applied to all remaining spindles, so there's a real chance you'll hit a second, now fatal, fault before the rebuild finishes.

2) Performance is horrible. I mean it! ZFS has proper journaling and variable-sized parity stripes; Storage Spaces has neither.

Use a RAID10 equivalent, or single-node Storage Spaces Direct + ReFS with multi-resilient disks.

https://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#Controlling_the_Number_of_Columns

(that's for performance: how to set the column count to build a proper RAID10 equivalent)

https://charbelnemnom.com/2017/06/how-to-create-a-multi-resilient-volume-with-refs-on-standalone-server-in-windows-server-2016-ws2016-hyperv-storagespaces/

(that's for multi-resilient disks: one will give you flash-in-mirror + spinning-disks-in-parity)


Unless you're doing a HEAVILY read-oriented system, Storage Spaces Parity mode is less than optimal. I'd strongly suggest using the Mirror mode. Do note that Mirror in Storage Spaces is NOT RAID1. It functions like RAID1E (mostly). It will divide your disks into chunks, then ensure all data exists on 2 disks (for 4 disks or fewer), or 3 disks (for 5 disks or greater). When combined with ReFS, with the integrity streams enabled and enforced, it will also checksum your data like ZFS does.
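The chunk-based layout described above can be sketched in a few lines of Python. This is a hypothetical round-robin allocator, not Storage Spaces' actual one, but it shows the RAID1E-style idea: copies of each chunk land on distinct disks spread across all spindles, rather than pairing whole disks like classic RAID1:

```python
def place_chunks(num_chunks, num_disks, copies=2):
    """Round-robin RAID1E-style placement sketch: each chunk's copies
    go on consecutive distinct disks, so reads and writes are spread
    over every spindle. Not Storage Spaces' real allocator."""
    layout = {d: [] for d in range(num_disks)}
    slot = 0
    for chunk in range(num_chunks):
        for _ in range(copies):
            layout[slot % num_disks].append(chunk)
            slot += 1
    return layout

# 3 chunks across 3 disks, 2 copies each:
print(place_chunks(3, 3))  # {0: [0, 1], 1: [0, 2], 2: [1, 2]}
```

Note how every chunk exists on exactly two disks, yet no two disks are a strict mirror pair; that's why losing any one disk is survivable and all spindles share the load.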

Also, I think you're confusing Storage Spaces with Storage Spaces Direct.

Windows Server 2016 Standard has Storage Spaces, but not Storage Spaces Direct. You do NOT need anything offered with "Direct" as you're not doing clustered storage. There's a reason it's only offered in the Datacenter edition: it's not useful outside of a clustered scenario.

You can absolutely open up Server Manager and create a 3-disk "Mirror" pool, which will function like RAID1E (mostly) and give you 6TB available rather than the 8TB you would get with Parity mode, but with much better write performance and better resiliency. You can add a 4th disk later and rebalance the pool to behave more like RAID10 (2 columns, 2 copies).
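The 6TB-vs-8TB tradeoff works out like this (a back-of-envelope sketch assuming equal-sized disks, single parity, and two-copy mirroring, ignoring pool metadata overhead):

```python
def usable_tb(disk_tb, count, mode):
    """Rough usable capacity for `count` equal disks of `disk_tb` TB.
    'mirror' keeps two copies of everything; 'parity' spends one
    disk's worth of capacity on parity. Back-of-envelope only."""
    total = disk_tb * count
    if mode == "mirror":
        return total / 2
    if mode == "parity":
        return total - disk_tb
    raise ValueError(f"unknown mode: {mode}")

print(usable_tb(4, 3, "mirror"))  # 6.0 TB usable
print(usable_tb(4, 3, "parity"))  # 8.0 TB usable
```

So you give up 2TB of raw capacity for the mirror's faster writes and simpler, safer rebuilds.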

The RAID5 feature in Disk Management is garbage; do not use it.


1) There is nothing inherently wrong with hardware RAID. RAID 5 has gotten a bad rap lately because disk sizes are increasing rapidly, which makes for very large arrays and raises the mathematical likelihood of an unrecoverable array failure.

2) "Software RAID" like Storage Spaces comes in a lot of flavors and configurations. Some are bad, and some are quite good. This is ultimately a mixture of hardware and software that needs to be properly configured.

Why use "Storage Spaces" or ZFS vs a RAID controller: if you build a very large RAID array (say 4x4TB RAID 5), your likelihood of a puncture (simply a bad bit, an unrecoverable read error, on an otherwise functional disk) is quite high. A hardware RAID controller has no idea what you are or will be installing on the disks (nor does it care); it simply uses an algorithm to bond those disks into one big "physical" disk presented to your operating system. That's handy in a generic sense, but if a drive fails, one of your GOOD disks could then turn up a bad bit and cause a rebuild failure, costing you your precious, precious data. The controller has no idea what that data is, so it can't help you recover partial data either (it doesn't speak Windows or Linux or whatever); it just says 'sorry for ya'.

If you run software RAID, you'll need a LOT more host hardware (CPU and RAM) to manage the data movement that a specialized chip on a RAID controller would normally handle, but in the event of a puncture, ZFS or ReFS (Storage Spaces) can at LEAST give you SOME data back.

RAID 5 vs RAID 6 vs RAID 10 etc: RAID 5 is getting a bad rap now because of what was just described. It is now said that in every 32TB of spinning rust you're essentially PROMISED a puncture. So if you're running that 4x4TB RAID 5 and a disk goes bad, you really only have about a 50% chance of getting your data back during a rebuild on hardware RAID! Pretty bad. Even a RAID 10 configuration of equivalent size (6x4TB) isn't immune: a puncture on the failed disk's mirror half during the rebuild can still take out that part of the array.
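That "about a 50% chance" figure can be sanity-checked with the standard URE back-of-envelope model. The commonly quoted consumer-drive spec of one unrecoverable read per 1e14 bits (roughly 12.5TB) is an assumption here; the real rate varies by drive class, so treat the result as illustrative:

```python
import math

def rebuild_success_probability(read_tb, ure_rate_per_bit=1e-14):
    """Chance of reading `read_tb` decimal terabytes during a rebuild
    without hitting a single unrecoverable read error (puncture),
    assuming independent errors at the given per-bit rate. A very
    simplistic model, but it shows why big RAID5 rebuilds are scary."""
    bits_read = read_tb * 8e12  # 8e12 bits per decimal TB
    return math.exp(-bits_read * ure_rate_per_bit)

# Rebuilding a 4x4TB RAID5 means reading all 3 surviving disks (12 TB):
print(round(rebuild_success_probability(12), 2))  # 0.38
```

Roughly a 38% chance of a clean rebuild under these assumptions, in the same ballpark as the coin-flip odds quoted above; a better (1e-15) enterprise-class URE rate improves the picture considerably.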

Now, SSDs don't suffer from punctures! SSD media has a kind of internal RAID on each drive, so you should never encounter a conventional 'puncture'. Multiple-disk failure is still possible, but FAR less likely (we'll all have to wait until drive sizes get up there to really test this, though). So with SSDs you're generally much safer (and faster) with hardware RAID.

TL;DR: Use Storage Spaces because disks fail a LOT, and Storage Spaces with ReFS has extra resiliency and rebuild options. Same with ZFS. That said, FreeNAS/ZFS is significantly faster than Storage Spaces for various reasons (mostly that Storage Spaces wasn't really designed for a single computer). Use ECC memory! Back up important data!