Configuring RAID 10 using MegaRAID Storage Manager on an LSI 9260 RAID card

Using MegaRAID Storage Manager 13.04 on an LSI 9260 RAID card, which is the correct way to create RAID 10 using 8 drives with maximum redundancy?

There appear to be multiple ways to reach the same amount of usable disk space.

Spanned Drive Group 0, RAID 10
 Virtual Drive 0, VD_0, 2.181 TB, Optimal
   Drives:
     Span 0:
      Slot 0 SAS 558.912 GB
      Slot 1 SAS 558.912 GB
      Slot 2 SAS 558.912 GB
      Slot 3 SAS 558.912 GB
     Span 1:
      Slot 4 SAS 558.912 GB
      Slot 5 SAS 558.912 GB
      Slot 6 SAS 558.912 GB
      Slot 7 SAS 558.912 GB

Reports as RAID 10 with 2.181 TB usable

Spanned Drive Group 0, RAID 10
 Virtual Drive 0, VD_0, 2.181 TB, Optimal
   Drives:
     Span 0:
      Slot 0 SAS 558.912 GB
      Slot 1 SAS 558.912 GB
     Span 1:
      Slot 2 SAS 558.912 GB
      Slot 3 SAS 558.912 GB
     Span 2:
      Slot 4 SAS 558.912 GB
      Slot 5 SAS 558.912 GB
     Span 3:
      Slot 6 SAS 558.912 GB
      Slot 7 SAS 558.912 GB

Reports as RAID 10 with 2.181 TB usable

Which is the correct configuration to allow one of each mirrored pair to fail before the array fails?


Solution 1:

First of all, see this post: What are the different widely used RAID levels and when should I consider them

Notice the difference between RAID 10 and RAID 01.

Match this to your setups (both of them labeled as RAID 10 in your text). Look carefully.

Read the part in the link I posted where it states:
  RAID 01. Good when: never. Bad when: always.

I think your choice should be obvious after this.


Edit: Stating things explicitly:

Your first setup is a mirrored pair of 4-drive stripes (RAID 0+1).

  Span 0:  4 drives in RAID0  \
                               }  Mirror from span 0 and span 1
  Span 1:  4 drives in RAID0  /

If any drive fails in a stripe, then that whole stripe is lost.
In your case this means:

1 drive lost -> Working in degraded mode.

2 drives lost: now we need some math.
If the second drive fails in the same span/stripe, you get the same result as with 1 drive lost: still degraded, with one span off-line.

If the second failure happens in the other span/stripe: whole array off-line. Time for your backups. (You did make and test those, right?)

The chance that the second failure lands in the wrong span is 4/7 (4 of the 7 remaining drives are in the working span, any of which can fail), and only 3/7 that it lands in the span that is already down. Those are not good odds.


Now the other setup: a stripe of 4 mirrors (proper RAID 10).

1 drive lost (any of the 4 spans): Array still works.

2 drives lost: a 6/7 (about 86%) chance that the array is still working; it only dies if the second failure hits the one drive mirroring the first. That is a lot better than the previous case, where the array survived a second failure only 3/7 (about 43%) of the time.
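To double-check those odds, here is a minimal sketch in plain Python (assuming drives 0-7 are grouped exactly as in the two layouts above) that enumerates every possible pair of failed drives for both layouts:

  from itertools import combinations

  DRIVES = range(8)

  def raid01_survives(failed):
      # Two 4-drive stripes (spans), mirrored: the array survives as
      # long as at least one span contains no failed drive.
      spans = [{0, 1, 2, 3}, {4, 5, 6, 7}]
      return any(span.isdisjoint(failed) for span in spans)

  def raid10_survives(failed):
      # Four 2-drive mirrors, striped: the array survives as long as
      # no mirror pair has lost both of its drives.
      mirrors = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]
      return all(not mirror <= failed for mirror in mirrors)

  pairs = [set(p) for p in combinations(DRIVES, 2)]
  for name, survives in (("RAID 0+1", raid01_survives),
                         ("RAID 10", raid10_survives)):
      ok = sum(map(survives, pairs))
      print(f"{name}: survives {ok}/{len(pairs)} two-drive failures")

This prints 12/28 (3/7, about 43%) for the mirrored stripes and 24/28 (6/7, about 86%) for the striped mirrors, matching the figures above.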


TL;DR: use the second configuration; it is more robust.

Solution 2:

The second configuration is what you want: multiple RAID 1 mirrors striped together.

I don't understand why some controllers give people non-viable options.
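For completeness, the same layout can also be built from the command line. Here is a hedged sketch using MegaCli, where each -ArrayN defines one two-drive mirror (span); the enclosure ID 252 and adapter number 0 are assumptions, so check yours first with MegaCli -PDList -aALL:

  MegaCli -CfgSpanAdd -r10 -Array0[252:0,252:1] -Array1[252:2,252:3] \
          -Array2[252:4,252:5] -Array3[252:6,252:7] -a0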

Solution 3:

It seems that LSI decided to introduce their own crazy terminology: when you really want a RAID 10 (as traditionally defined), you need to choose RAID 1 (!). I can confirm that it does mirroring and striping exactly the way I would expect of a RAID 10, in my case for a 6-disk array and a 16-disk array.

Whatever you configure as RAID 10 in the LSI meaning of the term seems to be more like a "RAID 100": every "span" is its own RAID 10, and these spans are put together as a RAID 0. (Btw, that's why it seems you can't define RAID 10 for numbers of disks other than multiples of 4, or multiples of 6 when using more than 3 disks.) Nobody seems to know what the advantage of such a "RAID 100" could be; the only thing that seems certain is that it has a significant negative impact on performance compared to a good old RAID 10 (which LSI, for whatever reason, calls RAID 1).
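If that reading is right, an 8-disk LSI "RAID 10" with two 4-drive spans would look schematically like this:

  Span 0:  4 drives in RAID 10  \
                                 }  striped together (RAID 0)
  Span 1:  4 drives in RAID 10  /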

This is the essence of the following very long thread, and I was able to reproduce the findings I mentioned above: http://community.spiceworks.com/topic/261243-raid-10-how-many-spans-should-i-use