RAID level and filesystem for a large storage server

ZFS doesn't like to sit on top of hardware RAID. Give ZFS the raw disks and configure them as raidz2 vdevs; striping two or more raidz2 vdevs gives you the ZFS equivalent of RAID60. It's also a good idea to keep a replacement drive nearby, or even leave one or more hot spares in the rack.
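As a rough sketch, a 24-bay layout with two striped 12-wide raidz2 vdevs might look like this (pool name and /dev/sd* device names are placeholders; in production use stable /dev/disk/by-id paths instead):

    # Two 12-wide raidz2 vdevs striped together: the ZFS analogue of RAID60.
    # sda..sdx stand in for the 24 data disks (bash brace expansion).
    zpool create tank raidz2 /dev/sd{a..l} raidz2 /dev/sd{m..x}

    # Optional: dedicate a drive as a hot spare (needs a free bay).
    zpool add tank spare /dev/sdy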

See the performance benchmarks here: https://calomel.org/zfs_raid_speed_capacity.html


For such a big setup (384 TB raw space) I strongly suggest using ZFS, as its data integrity (and repair) guarantees are simply too valuable to ignore.

If by "read performance" you mean sequential read speed, I would use a ZFS RAIDZ2 pool configured as 2x 12-wide vdevs. A large recordsize and lz4 compression are also good choices. If you go down that route, keep in mind that it is generally better to avoid hardware RAID when using ZFS.
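For example (a sketch, assuming a pool named tank as above; note that recordsize only affects newly written files):

    # 1M records favor large sequential reads; lz4 is cheap enough
    # to leave enabled pool-wide.
    zfs set recordsize=1M tank
    zfs set compression=lz4 tank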

If you need high random read performance (unlikely, based on your description), you should use smaller RAIDZ2 vdevs or even mirrors (if losing 50% of raw capacity is tolerable).
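A mirrored layout would look something like this (again a sketch with placeholder device names; each 2-way mirror yields half of its raw capacity):

    # Twelve 2-way mirrors striped into one pool: best random I/O,
    # at the cost of 50% of raw capacity.
    zpool create tank \
        mirror /dev/sda /dev/sdb  mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf  mirror /dev/sdg /dev/sdh \
        mirror /dev/sdi /dev/sdj  mirror /dev/sdk /dev/sdl \
        mirror /dev/sdm /dev/sdn  mirror /dev/sdo /dev/sdp \
        mirror /dev/sdq /dev/sdr  mirror /dev/sds /dev/sdt \
        mirror /dev/sdu /dev/sdv  mirror /dev/sdw /dev/sdx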

The non-ZFS alternative would be a hardware RAID60 array (with at least 2 GB of powerloss-protected writeback cache) and a classical non-CoW filesystem (e.g. XFS). In this case, you can use lvmthin as the volume manager and snapshot layer. That said, go with ZFS if you can.
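A sketch of that stack, assuming the hardware RAID60 volume appears as /dev/sdb (names and sizes are placeholders):

    # LVM on top of the hardware RAID volume.
    pvcreate /dev/sdb
    vgcreate vg_data /dev/sdb

    # Thin pool, then a thin (virtually sized) volume on top of it.
    lvcreate --type thin-pool -l 100%FREE -n thinpool vg_data
    lvcreate --type thin -V 200T --thinpool thinpool -n data vg_data

    # Classical non-CoW filesystem on the thin volume.
    mkfs.xfs /dev/vg_data/data

    # Thin snapshots need no preallocated size; activate with -K
    # because they carry the activation-skip flag by default.
    lvcreate -s -n data_snap vg_data/data
    lvchange -ay -K vg_data/data_snap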


Another recommendation is to keep your OS separate from your data disks.

That Supermicro chassis has two additional slots in the rear for 2.5" SATA disks. These should be set up as RAID1 and hold the OS and any swap. The 24 disks in the front should be used only for data, in whatever RAID array or ZFS setup you choose.
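Most OS installers can configure that mirror for you; done by hand with mdadm it's something like this (placeholder names for the two rear disks):

    # Mirror the two rear 2.5" disks for the OS and swap.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdy /dev/sdz

    # Then partition/format /dev/md0 and install onto it, leaving
    # all 24 front bays for data.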



I know you'd lose more capacity to parity, but I'd personally go with RAID60 using 3 x 8-disk arrays, simply for the shorter rebuild time. It won't benefit you in any other way, but a 12-wide array of 16TB disks is a bit much for me personally. Yes, it'll work.

The other option, given that you want to use ZFS, is RAIDZ. I'm no expert, but there are several people here who are.