Using software-RAID vs "firmware"-RAID (a.k.a. FakeRAID)

We recently bought a tower server on which I want to install Debian. I thought the device had hardware RAID, since I could see a RAID BIOS screen at boot. As it turns out, it is actually software RAID. When I configure the RAID drives through that firmware, I can still see both hard drives individually in the Debian installer, and when I try to repartition from within the installer, Debian warns that the software RAID drives would be lost.

I was slightly disappointed by this, as I had always thought hardware RAID would yield better performance. But anyhow, my question now is: should I disable this "firmware" RAID in the main BIOS and configure the RAID drives with the Debian installer instead? Or should I create the array using the firmware and skip the installer's RAID setup? Are there any reliability benefits to using the firmware?

The firmware identifies itself as:

LSI MegaRaid Software RAID BIOS Version A.10.10211615R
LSI SATA RAID Found at PCI Bus No:00 Dev No:1F

Update: I can see from the comments that others were also confused as to whether I was dealing with hardware RAID. The machine I have is a Lenovo ThinkServer TS440. Its datasheet says that it has

Integrated SATA SW RAID 0/1

As I understand it, the RAID logic is partly implemented in a chip on the motherboard (which is why it has a PCI address?), but it is not a typical hardware RAID controller.


Solution 1:

How I Learned to Stop Worrying and Love Software RAID:

Back when all of our server storage was on SCSI disks, I used to be pretty particular about using good hardware RAID -- we used HP/Compaq SmartArray controllers and had extremely good luck with them. I operated for a long time under the assumption that software RAID would cause a performance hit and not be as robust to failure as good, battery-backed hardware RAID controllers.

However, as we've moved to cheaper SAS and SATA storage, I've come to appreciate software RAID a whole lot more. Realizing that I could simply build a RAID array directly in the OS opened up a lot of flexibility and cost savings (true hardware controllers with battery-backed cache are still expensive) without too much of a performance hit (depending, of course, on the rest of the hardware).

The tradeoff in our case has boiled down to robustness and performance vs. low cost and flexibility. Unfortunately, most "fakeraid" solutions are the worst of both worlds. Performance is still lower than true hardware RAID, since in most cases the CPU and system memory do much of the work rather than the fakeraid controller. Robustness to failure is generally worse than hardware RAID, as the controllers are cheaper and again rely on the rest of the system hardware. And flexibility is reduced, because you can't necessarily rebuild an array on different hardware (as you could with pure software RAID). Pure hardware or pure software RAID is almost always a better choice than fakeraid.

With all that said, here are my recommendations for making software RAID work in Debian on a system with a fakeraid controller:

  • Disable the "RAID" firmware entirely in the BIOS -- set the controller to AHCI (if you're using SATA), JBOD, or whatever other setting lets you pass the disks through to the OS as directly as possible.

  • Use mdraid/mdadm (the kernel md driver) rather than dmraid; a minimal array-creation sketch follows this list.

  • Use cat /proc/mdstat to check on failure/rebuild status. Check this often, and set up automated email alerting for when a disk fails (see the monitoring sketch after this list).

  • For best results, keep RAID at the bottom of your storage stack. If you plan to use encryption and/or LVM, create those volumes on top of the RAID array (see this question for some specifics, and note that the issue mentioned seems to have been resolved in more recent Debian/Ubuntu releases). A rough LVM-on-md sketch follows this list.

  • Keep your kernel as up-to-date as possible, especially if you use SSDs; features like TRIM support are being added and improved continually.
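
For reference, here is a minimal sketch of creating a two-disk RAID1 array with mdadm on Debian. The device names (/dev/sda1, /dev/sdb1) and the single-data-partition layout are assumptions; substitute your actual partitions.

    # Partition both disks identically and mark the partitions as "Linux RAID".
    # /dev/sda1 and /dev/sdb1 below are assumed names -- adjust to your layout.

    # Build a two-device RAID1 array as /dev/md0
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # Record the array so it is assembled at boot, then refresh the initramfs
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u

    # Put a filesystem on the array
    mkfs.ext4 /dev/md0

The Debian installer's partitioner can also create md devices interactively, so the manual commands are mainly useful if you set things up after installation.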
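
Checking on array health and wiring up the email alerts mentioned above could look roughly like this; the recipient address is a placeholder, and on Debian the mdadm package normally runs the monitor daemon for you once MAILADDR is set.

    # Quick health check: array state and any rebuild progress
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Tell the mdadm monitor where to send failure alerts
    # (the address is a placeholder)
    echo 'MAILADDR admin@example.com' >> /etc/mdadm/mdadm.conf

    # Fire a one-off test alert to confirm mail delivery works
    mdadm --monitor --scan --test --oneshot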
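
Finally, a rough sketch of layering LVM on top of the md array, per the "RAID at the bottom of the stack" point; the volume group name, logical volume names, and sizes are made up for illustration.

    # Use the md array as the LVM physical volume
    pvcreate /dev/md0

    # Create a volume group and carve out logical volumes
    # ("vg0", "root", "swap" and the sizes are placeholder choices)
    vgcreate vg0 /dev/md0
    lvcreate -L 30G -n root vg0
    lvcreate -L 8G  -n swap vg0

    # Filesystems and swap go on the logical volumes, not the raw disks
    mkfs.ext4 /dev/vg0/root
    mkswap    /dev/vg0/swap

If you want encryption as well, the usual approach is to put a LUKS container on /dev/md0 (cryptsetup luksFormat) and then create the LVM physical volume inside it, keeping the RAID layer at the bottom.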