Low-end hardware RAID vs Software RAID [closed]

Solution 1:

A $10-20 "hardware" RAID card is nothing more than an opaque, binary driver blob running a crap software-only RAID implementation on the host CPU. Stay well away from it.

A $200 RAID card offers proper hardware support (i.e. a RoC running another opaque binary blob, one that is better and does not run on the main host CPU). I suggest staying away from these cards too because, lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation.

A $300-400 RAID card offering a power-loss-protected writeback cache is worth buying, but not for a small, Atom-based PC/NAS.

In short: I strongly suggest using Linux software RAID (mdadm). Another option worth serious consideration is a mirrored ZFS setup, but with an Atom CPU and only 4 GB of RAM, do not expect high performance.
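As a rough idea of how simple the mdadm route is (a minimal sketch, assuming two spare data disks at /dev/sdb and /dev/sdc; adjust device names and paths to your system):

    # Create a two-disk RAID 1 (mirror) array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Put a filesystem on it and mount it
    mkfs.ext4 /dev/md0
    mkdir -p /mnt/data
    mount /dev/md0 /mnt/data

    # Record the array so it is assembled at boot
    # (config path is /etc/mdadm/mdadm.conf on Debian/Ubuntu, /etc/mdadm.conf elsewhere)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf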

For more information, read here.

Solution 2:

Go ZFS. Seriously. It's so much better than hardware RAID, and the reason is simple: it uses variable-size stripes, so its parity modes (RAID-Z1 and RAID-Z2, the RAID 5 and RAID 6 equivalents) perform at RAID 10-like levels while remaining extremely cost-efficient. On top of that, you can put the intent log (ZIL/SLOG) and the L2ARC read cache on flash devices sitting on their own dedicated PCIe lanes.
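A rough sketch of what that looks like with ZFS on Linux (device names are assumptions; whether a separate log or cache device actually helps depends on your workload):

    # Three-disk RAID-Z1 pool (the RAID 5 equivalent)
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

    # Add a small flash partition as a separate intent log (SLOG, backing the ZIL)
    zpool add tank log /dev/nvme0n1p1

    # Add a larger flash partition as L2ARC read cache
    zpool add tank cache /dev/nvme0n1p2

    zpool status tank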

https://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/

There's ZFS on Linux, ZoL.

https://zfsonlinux.org/

Solution 3:

Here is another argument for software RAID on a cheap system.

Stuff breaks. You know this; that is why you are using RAID. But RAID controllers also break, as do RAM, processors, power supplies and everything else, including software. In most failures it is simple enough to replace the damaged component with an equivalent or better one: blow a 100 W power supply, grab a 150 W one and get going. The same goes for most components. With hardware RAID, however, there are now three exceptions to this pattern: the RAID controller, the hard drives, and the motherboard (or whatever is upstream of the controller if it is not an expansion card).

Let's look at the RAID card. Most RAID cards are poorly documented and mutually incompatible: you cannot replace a card from company xyz with one from abc, because they store their on-disk metadata differently (assuming you can even figure out who made the card to begin with). The only real solution is to keep a spare RAID card, exactly identical to the production one.

Hard drives are not as bad as RAID cards, but since the RAID card has physical connectors to the drives you must use compatible drives, and significantly larger replacements may cause problems. Significant care is needed when ordering replacement drives.

Motherboards are typically more troublesome than drives but less so than RAID cards. In most cases just verifying that a compatible slot is available is sufficient, but bootable RAID arrays can be no end of headaches. The way to avoid this problem is an external enclosure, but that is not cheap.

All these problems can be solved by throwing money at them, but for a cheap system that is not desirable. Software RAID, on the other hand, is immune to most (though not quite all) of these issues because it can use any block device.
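To illustrate that portability (a sketch; array and device names are assumptions): move the disks of an mdadm array to a completely different machine, on any controller, and it can usually be picked up with nothing more than:

    # Scan all block devices for md superblocks and assemble whatever arrays are found
    mdadm --assemble --scan

    # Inspect the result
    cat /proc/mdstat
    mdadm --detail /dev/md0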

The one drawback to software RAID on a cheap system is booting. As far as I know, the only bootloader that supports RAID is GRUB, and it only supports RAID 1, which means your /boot must live on a RAID 1 array. That is not a problem if you are only using RAID 1, and only a minor one in most other cases. However, GRUB itself (specifically the first-stage boot block) cannot be stored on the array; this can be managed by putting a spare copy on each of the other drives.
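A sketch of that workaround on a two-disk machine (assuming /dev/sda1 and /dev/sdb1 hold /boot and a BIOS/MBR setup; details differ between GRUB legacy and GRUB 2, and for UEFI):

    # Keep /boot on RAID 1; 1.0 (or 0.90) metadata puts the superblock at the end
    # of the partition, so the bootloader can read each member as a plain filesystem
    mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1

    # Install the GRUB boot block onto every member disk, so the machine
    # still boots if either drive dies
    grub-install /dev/sda
    grub-install /dev/sdb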

Solution 4:

  1. As others have said, there's no benefit to hardware RAID here, and there are various downsides. My main reason for preferring software RAID is that it's simpler and more portable (and thus more likely to actually recover successfully from various failure scenarios).

  2. (Also as others have said) 3-disk RAID 5 is a really bad RAID scheme -- it's almost the worst of all worlds, with very little benefit. It's sort of a compromise between RAID 0 and RAID 1, arguably slightly better than either, but that's about the only good thing to say about it. RAID has moved on to much better schemes, like RAID 6.

  3. My advice (hardware):

    • Get a 4-port SATA card for that PCI slot, bringing you to six total SATA ports -- one for a boot drive, and five for data drives. I see one for ~$15, advertised as hardware RAID, but you can just ignore those features and use it as plain SATA.

    • Get a small SSD for the boot drive. I know there's still the perception that "SSDs are too expensive", but it's barely true anymore, and not at all on the small end -- 120GB is way more than you'll need for this boot drive, and you can get one for ~$25.

    • An optional but really nice addition (if your PC case has 3x 5.25" drive bays) is to get a drive bay converter: you can turn three 5.25" (optical) drive bays into five hot-swappable front-loading 3.5" (HDD) bays, so you won't have to take the machine apart (or even shut it down) to swap drives. (Search for "backplane 5 in 3".)

    • Use 5x whatever-size HDDs in RAID 6 (dual redundancy, 3x the drive size in usable space); a sketch of the mdadm command is below, after this list.

  4. My advice (software): Look at OpenMediaVault for the OS / file-server software. It's an "appliance distro" perfect for exactly this kind of use -- Debian-based (actually a Linux port of the BSD-based FreeNAS) with everything pre-configured for a NAS server. It makes setting up and managing software RAID (as well as LVM, network shares, etc.) really simple.
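For the 5-drive RAID 6 array from point 3, the mdadm invocation is roughly as follows (a sketch; device names are assumptions, and OpenMediaVault can do the same thing from its web UI):

    # Dual-parity RAID 6 across five drives: any two may fail,
    # usable space is 3x one drive
    mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]

    # Watch the initial sync progress
    cat /proc/mdstat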