RAID-5 or RAID-6 - is RAID-5 really that bad? [duplicate]

Possible Duplicate:
Which is better: RAID5 + 1 Hotspare / RAID6?

I need to decide between RAID5 and RAID6.

The servers each have a hardware RAID controller and 6 drives.

The drives are Western Digital RE3 enterprise 1 TB drives. The data sheet gives MTTF = 1.2 million hours and a bit error rate of 1 in 10^15.

Another server has 6 Seagate SAS drives (172 GB each) with MTTF = 1.6 million hours and a bit error rate of 1 in 10^16.

When I do the math I get quite comfortable numbers for this setup (about 110 years to data loss), and even better ones with the SAS drives. However, this is based on the manufacturer's data. Is that realistic? The formulas are on the last slides here (in German, sorry): http://www.heinlein-support.de/sites/default/files/RAID-Mathematik_fuer_Admins.pdf
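
For reference, here is a minimal sketch of the kind of MTTDL calculation I mean. These are my own approximations, not the exact formulas from the slides; the 24-hour rebuild time and the simplifications are assumptions, so the resulting numbers will differ depending on what you plug in:

    # Rough MTTDL sketch (my own approximations, not the slide's exact formulas).
    # Assumed inputs: drive specs from the data sheets, 24 h rebuild time.

    TB = 1e12                 # bytes
    HOURS_PER_YEAR = 24 * 365

    def p_ure(read_bytes, ber):
        """Probability of at least one unrecoverable read error while
        reading `read_bytes` during a rebuild."""
        return 1 - (1 - ber) ** (read_bytes * 8)

    def mttdl_raid5(n, mttf_h, rebuild_h, drive_bytes, ber):
        """One drive fails; the array is lost if a second drive fails or a
        URE occurs while reading the remaining n-1 drives."""
        p_second = (n - 1) * rebuild_h / mttf_h
        return mttf_h / (n * (p_second + p_ure((n - 1) * drive_bytes, ber)))

    def mttdl_raid6(n, mttf_h, rebuild_h, drive_bytes, ber):
        """RAID-6 survives one extra failure: data loss needs a second drive
        failure and then a third failure or URE during that rebuild."""
        p_second = (n - 1) * rebuild_h / mttf_h
        p_third = (n - 2) * rebuild_h / mttf_h
        return mttf_h / (n * p_second * (p_third + p_ure((n - 2) * drive_bytes, ber)))

    # 6 x WD RE3: 1 TB, MTTF 1.2M hours, BER 1e-15, assumed 24 h rebuild
    for name, fn in (("RAID-5", mttdl_raid5), ("RAID-6", mttdl_raid6)):
        years = fn(6, 1.2e6, 24, 1 * TB, 1e-15) / HOURS_PER_YEAR
        print(f"{name}: ~{years:,.0f} years to data loss")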

I've also found http://blog.kj.stillabower.net/?p=37 - these graphs suggest that 6 drives can work, but that for anything important one should go with RAID6. That data is older, though, and also seems to include consumer drives.

So, is there any real-world data on this? I can see that using more than 8-9 disks is problematic, but it looks like 6 enterprise disks are still fine.

So what to do? RAID-5 or RAID-6?


Solution 1:

You want to go with RAID-6. The problem with RAID-5 and very large drives is that when you have a failure and have to rebuild the failed drive, you MUST be able to read every byte from the remaining drives. If you have a 7+1 RAID-5 set of 1 TB drives, that means you need to accurately read 7 TB of data to rebuild the failed drive. I have personally experienced data loss during such a rebuild, as undetected bad spots on the remaining drives were discovered during the rebuild.
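
To put a rough number on that risk, here is a back-of-the-envelope calculation. It assumes errors are independent and that the quoted bit error rate is the only failure mode, so treat it as an illustration, not a precise prediction:

    # Chance of at least one unrecoverable read error (URE) while reading the
    # 7 TB of surviving data in a 7+1 RAID-5 set of 1 TB drives.
    bits_read = 7 * 1e12 * 8

    for label, ber in (("enterprise, 1e-15", 1e-15), ("consumer, 1e-14", 1e-14)):
        p_fail = 1 - (1 - ber) ** bits_read
        print(f"BER {label}: ~{p_fail:.0%} chance the rebuild hits a URE")

With those assumptions the single-parity rebuild already carries a few-percent chance of hitting a bad sector on enterprise-class drives, and an order of magnitude worse bit error rate pushes it toward a coin flip.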