What should I take into consideration when choosing a RAID type?

There is now a canonical answer for this question over here that outlines every type of RAID in use today, when you should and shouldn't use each one, and how to calculate the usable capacity of each RAID type.

I administer the hardware that runs a website that receives 10,000 daily visitors. What variables should I consider when deciding which RAID type to choose for our web application? The server is an HP DL380 G6 with six 146GB (SCSI) HDDs.


Solution 1:

I know that machine and those disks very well, but you've not told us what the application does or how much space you need - this is important because you have a few choices, so let's go through them.

RAID 0 - You'd have 6 x 146GB (actually you only get about 95% of that 146GB, take this into consideration throughout) available to your application - this gets you the most space but is a bad idea, because when one disk fails you lose all your data - avoid this.
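
To put a very rough number on that risk, here's a quick sketch - the 5% per-disk annual failure rate is purely an assumed illustrative figure, not a measured value for these drives:

```python
# Sketch: chance that a 6-disk RAID 0 array loses data within a year.
# The per-disk annual failure rate (AFR) below is an assumed figure.
disk_afr = 0.05   # assumed ~5% annual failure rate per disk
disks = 6

# RAID 0 loses everything if ANY disk fails, so the array only
# survives the year if every single disk survives it.
array_afr = 1 - (1 - disk_afr) ** disks
print(f"Chance of losing the whole array in a year: {array_afr:.1%}")
# -> roughly 26% under these assumptions, versus 5% for one disk
```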

RAID 1 - you could theoretically set up 3 x (2 x 146GB) RAID 1 arrays - there may be a good reason to do this: you could have your OS on one pair, your database on another pair and your logs on another pair - if your application is a write-heavy database this isn't the worst way to go, provided your data would fit in that little space - ignore this if your application isn't a database.

Ignore RAID 3 and 4 - they're old-school (unless you're NetApp of course :) )

RAID 5 - this would give you 5 x 146GB of space and could survive a single disk failure - RAID 5 is kind-of hated by quite a lot of us geeks for dull, but valid, statistical reasons - if your application is something that does mostly reads against an easily restorable or transient data set then RAID 5 can give you a pretty good balance.
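
Those 'dull, but valid, statistical reasons' largely come down to the chance of hitting an unrecoverable read error (URE) while rebuilding a degraded array, since a rebuild has to read every surviving disk in full. A minimal sketch, assuming a datasheet-style URE rate of 1 in 10^14 bits (an assumption - check your drives' actual spec):

```python
# Sketch: probability of hitting an unrecoverable read error (URE)
# while rebuilding a degraded 6-disk RAID 5 of 146GB drives.
# The URE rate below is an assumed datasheet-style figure.
ure_per_bit = 1e-14
disk_bytes = 146e9
surviving_disks = 5   # the rebuild reads all five surviving disks in full

bits_read = surviving_disks * disk_bytes * 8
p_clean_rebuild = (1 - ure_per_bit) ** bits_read
print(f"Chance of at least one URE during rebuild: {1 - p_clean_rebuild:.1%}")
# -> around 5-6% here; the risk grows quickly with bigger or more disks
```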

RAID 6 - this would give you 4 x 146GB but allow you to survive two concurrent disk failures - I would avoid this mode given your setup as it carries a write performance penalty and only gives you a single disk of extra space over...

RAID 10 - this mirrors your 6 disks into three pairs and stripes across them - so you only get 3 x 146GB of space - BUT it can survive up to three disk failures (one per mirrored pair) without a big performance hit, and in fact it's often the fastest mode for both reads and writes - if you can live with only 3 disks-worth of space then this mode would be great for most applications.
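
If it helps to see why the 'up to three disk failures' claim holds, here's a small sketch that models the array as three mirrored pairs and enumerates the failure combinations:

```python
# Sketch: which disk-failure combinations a 6-disk RAID 10 survives,
# modelled as three mirrored pairs striped together.
from itertools import combinations

pairs = [(0, 1), (2, 3), (4, 5)]   # disk IDs grouped into mirror pairs

def survives(failed):
    # The array is lost only if BOTH disks of some mirror pair have failed.
    return not any(a in failed and b in failed for a, b in pairs)

for n in range(1, 5):
    combos = list(combinations(range(6), n))
    ok = sum(survives(set(c)) for c in combos)
    print(f"{n} failed disk(s): survives {ok} of {len(combos)} combinations")
# 1 failure: always survives; 3 failures: survives only when each pair
# loses at most one disk (8 of 20 combinations); 4 failures: never.
```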

RAID 50 - well you've only just about got enough disks for this and you'd only get 4 x 146GB of space for no real benefit in your situation - and you don't have enough disks for RAID 60.
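
To tie the space figures above together, here's a rough usable-capacity sketch for the layouts discussed - it ignores the formatting overhead mentioned earlier and just uses the nominal 146GB per disk:

```python
# Rough usable-capacity sketch for 6 x 146GB disks in the layouts above.
# Ignores filesystem/formatting overhead; uses the nominal disk size.
DISK_GB = 146
N = 6

layouts = {
    "RAID 0":  N * DISK_GB,            # stripe, no redundancy
    "RAID 1":  (N // 2) * DISK_GB,     # total across three mirrored pairs
    "RAID 5":  (N - 1) * DISK_GB,      # one disk's worth of parity
    "RAID 6":  (N - 2) * DISK_GB,      # two disks' worth of parity
    "RAID 10": (N // 2) * DISK_GB,     # mirror pairs, striped
    "RAID 50": 2 * (3 - 1) * DISK_GB,  # two 3-disk RAID 5 sets, striped
}

for name, gb in layouts.items():
    print(f"{name:7s} -> {gb} GB usable ({gb // DISK_GB} x {DISK_GB}GB)")
```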

So you can see how your application requirements will drive your disk layout - let us know what you're doing and we'll point you to one mode over another, ok?
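
For the read-heavy versus write-heavy distinction, the classic rule-of-thumb write penalties per RAID level give a rough feel for why the mode matters - this is a sketch only, and the ~150 IOPS per spindle figure is an assumption rather than a measurement of these disks:

```python
# Sketch: rule-of-thumb small random write throughput per RAID level.
# Assumes ~150 IOPS per 10k spindle, an illustrative figure only.
SPINDLE_IOPS = 150
DISKS = 6

# Classic write-penalty factors: RAID 0 = 1 IO per write, RAID 1/10 = 2,
# RAID 5 = 4 (read data + parity, write data + parity), RAID 6 = 6.
penalty = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

for level, p in penalty.items():
    write_iops = DISKS * SPINDLE_IOPS / p
    print(f"{level:7s}: ~{write_iops:.0f} random write IOPS from {DISKS} disks")
```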

Now one other thing to consider is that you have 2 spare drive slots left on that model - I'd urge you to fill them now. The reason is that if you've got a database system you can have a RAID 1 pair for your OS and apps, four disks in RAID 10 for data and a pair of disks for database logs - this will be pretty fast and give you 2 x 146GB of database space instead of just 1 x 146GB - does that make sense? If you have a more 'read-only' type of setup then having 8 disks will allow you to have a much larger and/or much faster RAID 5/6 or 10 setup, and these are best created like this on day one rather than added to later.
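
As a quick sanity check on that 8-disk split - this sketch assumes the data disks go into a 4-disk RAID 10 and the OS and log disks into RAID 1 pairs:

```python
# Sketch of usable space for the suggested 8-disk database layout:
# RAID 1 pair for OS/apps, 4-disk RAID 10 for data, RAID 1 pair for logs.
DISK_GB = 146

layout = {
    "OS/apps (RAID 1, 2 disks)": 1 * DISK_GB,
    "Data (RAID 10, 4 disks)":   2 * DISK_GB,
    "DB logs (RAID 1, 2 disks)": 1 * DISK_GB,
}

for part, gb in layout.items():
    print(f"{part}: {gb} GB usable")
print(f"Total usable: {sum(layout.values())} GB across 8 disks")
```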

I hope this helps - please come back with more information. Oh, and P.S. good choice on the server, they're great - have you got iLO working yet? It's a life-saver :)