How many disks is too many in this RAID 5 configuration?
Solution 1:
I've wrestled with this question for a while. There are a number of factors determining how many disks should go into a RAID5 array. I don't know the HP 2012i, so here is my generic advice for RAID5:
- Non-recoverable read error rate: When a non-recoverable read error occurs, that read fails. For a healthy RAID5 array this is no problem, since the missing data can be reconstructed from the parity information. If one happens during a rebuild, when the entire RAID5 set is read in order to regenerate the parity info, it can cause the entire RAID5 array to be lost. This rate is quoted like this: "1 per 10^14 bits read" and is found in the detailed tech specs for drives. You do not want the total capacity of your RAID5 array to be more than about half of that figure. Enterprise drives (10K RPM SAS qualifies) can go larger than desktop drives (SATA). See the sketch after this list for a rough feel for the numbers.
- For an example of this spec, see the Seagate Barracuda ES.2 data sheet.
- Performance degradation during rebuilds: If performance noticeably sucks during rebuilds, you want to make sure your array can rebuild quickly. In my experience write performance tends to suck a lot worse during rebuilds than read performance does. Know your I/O. Your tolerance for bad I/O performance will put an upper limit on how large your RAID5 array can get.
- Performance degradation during other array actions: Adding disks, creating LUNs, changing stripe widths, changing RAID levels. All of these can impact performance. Some controllers are very good about isolating the performance hit. Others aren't so good. Do some testing to see how bad it gets during these operations. Find out if restriping a 2nd RAID5 array impacts performance on the first RAID5 array.
- Frequency of expand/restripe operations: Adding disks, or on some controllers creating new LUNs, can also cause the entire array to redo parity. If you plan on active expansion, then you'll be running into this kind of performance degradation much more often than simple disk failure-rate would suggest.
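To get a rough feel for the non-recoverable read error math, here is a back-of-the-envelope sketch in Python. It assumes the 1-per-10^14-bits figure typical of desktop-class data sheets, independent errors, and that the whole surviving capacity has to be read during the rebuild; the drive size and disk counts are only example values, not anything specific to the HP 2012i.

```python
# Rough estimate of the chance of hitting an unrecoverable read error (URE)
# while rebuilding a degraded RAID5 set. Assumes errors are independent and
# that all surviving drives must be read in full (the worst case).

def rebuild_ure_probability(disks, disk_size_tb, ure_rate_bits=1e14):
    """Probability of at least one URE while reading (disks - 1) full drives."""
    bits_read = (disks - 1) * disk_size_tb * 1e12 * 8   # TB -> bytes -> bits
    return 1 - (1 - 1 / ure_rate_bits) ** bits_read

for n in (4, 8, 12, 16):
    p = rebuild_ure_probability(n, disk_size_tb=1.0)
    print(f"{n:2d} x 1 TB, URE 1 per 1e14 bits: ~{p:.0%} chance of a URE during rebuild")
```

The point is simply that the odds climb quickly with total array capacity, which is why the URE rate effectively caps how large a single RAID5 set should be on a given class of drive.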
RAID6 (double parity) is a way to get around the non-recoverable read error rate problem. It does increase controller overhead, though, so be aware of the CPU limits on your controllers if you go there. You'll hit I/O bottlenecks faster using RAID6. If you do want to try RAID6, do some testing to see if it'll behave the way you need it to. It's a parity RAID, so it has the same performance penalties for rebuilds, expansions, and restripes as RAID5; it just lets you grow larger in a safer way.
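If it helps to see why RAID6 "lets you grow larger" at a shrinking relative cost, here is a minimal capacity sketch; the 1 TB drive size is just a placeholder.

```python
# Usable capacity of RAID5 (one parity disk) vs RAID6 (two parity disks)
# as the disk count grows. The 1 TB drive size is an arbitrary example.

def usable_tb(disks, disk_size_tb, parity_disks):
    return (disks - parity_disks) * disk_size_tb

print("disks  RAID5 (TB)  RAID6 (TB)  capacity given up for RAID6")
for n in (4, 8, 12, 16):
    r5 = usable_tb(n, 1.0, parity_disks=1)
    r6 = usable_tb(n, 1.0, parity_disks=2)
    print(f"{n:5d}  {r5:10.1f}  {r6:10.1f}  {1 - r6 / r5:.0%}")
```

The second parity disk costs proportionally less as the set grows, which is part of why RAID6 mostly shows up on larger arrays.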
Solution 2:
There isn't really a clear limit on the number of disks in RAID 5 per se. The limits you'll run into are typically related to RAID 5's relatively poor write performance, and to limitations imposed elsewhere (the RAID controller, how you organize your data, etc.).
Having said that, with 7-8 disks in use you're close to the upper bound of common RAID 5 deployments. I'd guesstimate that the vast majority of RAID 5 deployments use fewer than 10 disks. If more disks are wanted, one would generally go for a nested RAID level such as RAID 50.
I'm more puzzled by your choice to keep one big array for all of this. Would your needs not be better served by two arrays: one RAID 5 for slow, mostly-read data, and one RAID 10 for the more I/O-intensive data with more writes?
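To put very rough numbers on that split, here is a sketch using the conventional small-write penalties (about 4 back-end I/Os per random write for RAID5, about 2 for RAID10); the per-disk IOPS figure and the read/write mixes are placeholder assumptions, not measurements from any particular array.

```python
# Rough front-end random IOPS for an 8-disk set under different write mixes,
# using the usual small-write penalties: RAID5 ~4 back-end I/Os per write,
# RAID10 ~2. 150 IOPS per disk is an assumed figure for 10K SAS drives.

def frontend_iops(disks, iops_per_disk, write_fraction, write_penalty):
    raw = disks * iops_per_disk                      # total back-end IOPS
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

for wf in (0.1, 0.3, 0.5):
    r5 = frontend_iops(8, 150, wf, write_penalty=4)
    r10 = frontend_iops(8, 150, wf, write_penalty=2)
    print(f"{wf:.0%} writes: RAID5 ~{r5:.0f} IOPS, RAID10 ~{r10:.0f} IOPS")
```

The gap widens as the write fraction goes up, which is the usual argument for putting the write-heavy data on RAID 10 and leaving RAID 5 for the mostly-read bulk storage.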