Why is RAID 1+6 not a more common layout?

Why are the nested RAID levels 1+5 or 1+6 almost unheard of? The nested RAID levels Wikipedia article is currently missing their sections. I don't understand why they are not more common than RAID 1+0, especially when compared to RAID 1+0 triple mirroring.

It is apparent that rebuilding time is becoming increasingly problematic as drive capacities are increasing faster than their performance or reliability. I'm told that RAID 1 rebuilds quicker and that a RAID 0 array of RAID 1 pairs avoids the issue, but surely so would a RAID 5 or 6 array of RAID 1 pairs. I'd at least expect them to be a common alternative to RAID 1+0.

For 16 × 1TB drives, here are my calculations of the naïve probability of resorting to backup, i.e. with the simplifying assumption that the drives fail independently and with equal probability:

RAID | storage | cumulative probabilities of resorting to backup /m
 1+0 |     8TB | 0, 67, 200, 385, 590, 776, 910, 980, 1000, 1000, 1000
 1+5 |     7TB | 0,  0,   0,  15,  77, 217, 441, 702,  910, 1000, 1000
 1+6 |     6TB | 0,  0,   0,   0,   0,   7,  49, 179,  441,  776, 1000
(m = 0.001, i.e. milli.)
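
If you want to check or extend these figures, here's a minimal brute-force sketch of that naive model in Python (the p_backup name and its arguments are my own, and it assumes no rebuilds ever happen):

    from itertools import combinations

    def p_backup(group_sizes, tolerated_dead, k):
        """P(data loss | k drives failed) under the naive model: drives
        fail independently and uniformly at random, with no rebuilds.
        group_sizes lists the RAID 1 mirror sets; the layer on top
        survives while at most tolerated_dead groups have lost every
        member."""
        owner = [g for g, size in enumerate(group_sizes) for _ in range(size)]
        lost = total = 0
        for failed in combinations(range(len(owner)), k):
            fails = [0] * len(group_sizes)
            for d in failed:
                fails[owner[d]] += 1
            dead = sum(f == s for f, s in zip(fails, group_sizes))
            lost += dead > tolerated_dead
            total += 1
        return lost / total

    # 16 drives as 8 mirror pairs; RAID 0/5/6 on top tolerates 0/1/2 dead pairs.
    # Values in thousandths (/m):
    for name, tol in (("1+0", 0), ("1+5", 1), ("1+6", 2)):
        print(name, [round(1000 * p_backup([2] * 8, tol, k)) for k in range(1, 12)])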

If this is correct then it's quite clear that RAID 1+6 is far more reliable than RAID 1+0 for only a 25% reduction in storage capacity. As is the case in general, the theoretical write throughput (not counting seek times) is storage capacity / array size × number of drives × the write throughput of the slowest drive in the array (RAID levels with redundancy have higher write amplification for writes that don't fill a stripe, but this depends on chunk size). The theoretical read throughput is the sum of the read throughputs of the drives in the array, except that RAID 0, RAID 5, and RAID 6 can still be theoretically limited by the slowest, 2nd slowest, and 3rd slowest drive read throughputs respectively. So, assuming identical drives, that would be respectively 8×, 7×, or 6× the maximum write throughput of a single drive, and 16× its maximum read throughput.
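
As a quick sanity check, the same arithmetic in Python (the 150 MB/s per-drive figure is purely hypothetical):

    def naive_throughput(usable_tb, total_tb, n_drives, drive_mbps):
        # theoretical ceilings from the formula above; ignores seek time
        # and partial-stripe write amplification
        write = usable_tb / total_tb * n_drives * drive_mbps
        read = n_drives * drive_mbps  # sum of the per-drive read rates
        return write, read

    # 16 identical 1TB drives, each hypothetically 150 MB/s:
    for name, usable_tb in (("1+0", 8), ("1+5", 7), ("1+6", 6)):
        w, r = naive_throughput(usable_tb, 16, 16, 150)
        print(f"RAID {name}: {w:.0f} MB/s write ({w/150:.0f}x), "
              f"{r:.0f} MB/s read ({r/150:.0f}x)")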

Furthermore, consider a RAID 0 quadruple of RAID 1 triples, i.e. RAID 1+0 triple mirroring of 12 drives, and a RAID 6 sextuple of RAID 1 pairs, i.e. RAID 1+6 of 12 drives. Again, these are identical 1TB drives. Both layouts have the same number of drives (12), the same amount of storage capacity (4TB), the same proportion of redundancy (2/3), the same maximum write throughput (4×), and the same maximum read throughput (12×). Here are my calculations (so far):

RAID      | cumulative probabilities of resorting to backup /m
1+0 (4×3) | 0, 0, 18,  ?,   ?,   ?,   ?,   ?, 1000
1+6 (6×2) | 0, 0,  0,  0,   0,  22, 152, 515, 1000
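
(The p_backup sketch above fills in the missing entries:)

    # Same model, 12 drives: the RAID 0 of triples tolerates 0 dead
    # groups; the RAID 6 of pairs tolerates 2.
    print("1+0 (4x3)", [round(1000 * p_backup([3] * 4, 0, k)) for k in range(1, 10)])
    print("1+6 (6x2)", [round(1000 * p_backup([2] * 6, 2, k)) for k in range(1, 10)])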

Yes, this may look like overkill, but where triple mirroring is used to split off a clone for backup, RAID 1+6 can serve just as well - simply freeze and remove one drive from each of all but two of the RAID 1 pairs. While degraded like this, it still has far better reliability than the degraded RAID 1+0 array. Here are my calculations for 12 drives degraded by 4 in this manner:

RAID      | cumulative probabilities of resorting to backup /m
1+0 (4×3) | (0, 0, 0, 0), 0, 143, 429, 771, 1000
1+6 (6×2) | (0, 0, 0, 0), 0,   0,  71, 414, 1000
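
(Again with the sketch above: after the split, the triple-mirror stripe is left as 4 pairs, and the RAID 1+6 is left with 2 full pairs plus 4 single-drive 'mirrors':)

    print("1+0 (4x3)", [round(1000 * p_backup([2] * 4, 0, k)) for k in range(1, 6)])
    print("1+6 (6x2)", [round(1000 * p_backup([2, 2, 1, 1, 1, 1], 2, k)) for k in range(1, 6)])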

Read throughput, however, could be degraded down to 6× during this time for RAID 1+6, whereas RAID 1+0 is only reduced to 8×. Nevertheless, if a drive fails while the array is in this degraded state, the RAID 1+6 array would have a 50–50 chance of staying at about 6× or being limited further to 5×, whereas the RAID 1+0 array would be limited down to a 4× bottleneck (the mirror group reduced to a single drive throttles the whole stripe). Write throughput should be pretty much unaffected (it may even increase if the drives taken for backup happened to be the limiting slowest drives).

In fact, both layouts can be seen as 'triple mirroring' because the degraded RAID 1+6 array is capable of splitting off an additional RAID 6 group of 4 drives. In other words, this 12-drive RAID 1+6 layout can be divided into 3 degraded (but functional) RAID 6 arrays!

So is it just that most people haven't gone into the maths in detail? Will we be seeing more RAID 1+6 in the future?


Generally I'd say RAID 1+0 tends to be more widely used than 1+5 or 1+6 because it's reliable enough while providing marginally better performance and more usable storage.

I think most people would consider the failure of a full RAID 1 pair within the RAID 1+0 group to be an incredibly rare event that's worth breaking out the backups for - and they probably aren't too enthusiastic about getting under 50% of their physical disk capacity as usable space.

If you need better reliability than RAID 1+0, then go for it! But most people probably don't need that.


The practical answer lies somewhere at the intersection of hardware RAID controller specifications, average disk sizes, drive form-factors and server design.

Most hardware RAID controllers are limited in the RAID levels they support. Here are the RAID options for an HP ProLiant Smart Array controller:

[raid=0|1|1adm|1+0|1+0adm|5|50|6|60]

note: the "adm" is just triple-mirroring

LSI RAID controllers support: 0, 1, 5, 6, 10, 50, and 60

So these controllers are only capable of RAID 50 and 60 as nested levels. LSI (whose controllers Dell rebrands as PERC) and HP make up most of the enterprise server storage adapter market. That's the major reason you don't see something like RAID 1+6 or RAID 61 in the field.

Beyond that consideration, nested RAID levels past RAID 10 require a relatively large number of disks. Given the increasing drive capacities available today (with 3.5" nearline SAS and SATA drives), coupled with the fact that many server chassis are designed around 8 × 2.5" drive cages, there isn't much of an opportunity to physically configure RAID 1+6 or RAID 61.

The areas where you may see something like RAID 1+6 would be large-chassis software RAID solutions. Linux MD RAID or ZFS are definitely capable of it. But by then, drive failure can be mitigated by hot or cold spare disks. RAID reliability isn't much of an issue these days, provided you avoid toxic combinations of RAID level and hardware (e.g. RAID 5 and 6TB disks). In addition, read and write performance would be abstracted by tiering and caching layers. Average storage workloads typically benefit from one or the other.

So in the end, it seems as though the need/demand just isn't there.


  • You have diminishing returns on reliability. RAID 6 is pretty unlikely to suffer a compound failure even on nasty SATA drives with a 1 in 10^14 UBER (unrecoverable bit error rate). On FC/SAS drives your UBER is 1 in 10^16, and you get considerably more performance too.

  • RAID group reliability doesn't protect you against accidental deletion (so you need the backups anyway).

  • Beyond certain levels of RAIDing, your odds of a compound failure of disks become lower than the odds of a compound failure of the supporting infrastructure (power, network, aircon leak, etc.).

  • Write penalty. Each incoming write on your RAID 61 will trigger 12 I/O operations if done naively (see the sketch after this list). RAID 6 is already painful in 'low tier' scenarios in terms of IOPS per TB of random write (and in a higher tier, your failure rate is 100× better anyway).

  • It's not a '25% reduction', it's a further 25% reduction: your 16TB is turning into 6TB, so you're getting 37.5% usable storage. You need 3× as many disks per unit of capacity, and 3× as much datacentre space. You would probably get more reliability by simply making smaller RAID 6 sets. I haven't done the number crunching, but try the sums for, say, RAID 6 in 3 × (3+2) sets (15 drives, less storage overhead than your RAID 10). Or doing 3-way mirrors instead.
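
To spell out the write-penalty arithmetic from that list (a naive read-modify-write path with no full-stripe optimisation - the figures are illustrative only):

    # each small host write to a RAID 6 leg reads data+P+Q, then writes
    # data+P+Q back:
    raid6_ios = 3 + 3
    # RAID 61 mirrors two RAID 6 arrays, and each leg naively performs
    # its own read-modify-write:
    raid61_ios = 2 * raid6_ios
    print(raid61_ios)  # -> 12 back-end I/Os per host write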

Having said that, it's more common than you think to do it for multi-site DR. I run replicated storage arrays where I've got RAID 5/6/DP RAID groups replicated asynchronously or synchronously to a DR site. (Don't do sync if you can possibly avoid it - it looks good, but it's actually horrible.)

With my NetApps, that's a MetroCluster with some mirrored aggregates. With my VMAXes, we've got Symmetrix Remote Data Facility (SRDF). And my 3PARs do Remote Copy.

It's expensive, but provides 'data centre catching fire' levels of DR.

Regarding triple mirrors - I've used them, but not as direct RAID resilience measures, rather as full clones as part of a backup strategy. Sync a third mirror, split it off, mount it on a separate server, and back it up using entirely different infrastructure. And sometimes rotate the third mirror as a recovery option.

The point I'm trying to make is that in my direct experience as a storage admin - in a ~40,000 spindle estate (yes, we're replacing tens of drives daily) - we've had to go to backups for a variety of reasons in the last 5 years, but none of them has been a RAID group failure. We do debate the relative merits and the acceptable recovery times, recovery points, and outage windows. And underpinning all of this is ALWAYS the cost of the extra resilience.

Our arrays all media-scrub and failure-predict, and aggressively spare out and test drives.

Even if there were a suitable RAID implementation, the cost-benefit just isn't there. The money spent on the extra storage space would be better invested in longer retention or a more frequent backup cycle. Or faster comms. Or just generally faster spindles, because even with identical resilience numbers, rebuilding spares faster improves your compound failure probability.
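
As a purely illustrative aside on that last point, the textbook MTTDL approximation for a two-drive mirror (MTTDL ≈ MTTF² / (2 × MTTR)) shows why: halving the rebuild window doubles the mean time to data loss. The figures below are hypothetical:

    mttf_hours = 1.0e6  # hypothetical drive MTTF
    for rebuild_hours in (24, 12, 6):
        # two-drive mirror: data is lost if the partner dies mid-rebuild
        mttdl_years = mttf_hours ** 2 / (2 * rebuild_hours) / 8766
        print(f"rebuild in {rebuild_hours}h -> MTTDL ~ {mttdl_years:.1e} years")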

So the answer I would therefore offer to your question is:

You do not see RAID 1+6 and 1+5 very often, because the cost benefit simply doesn't stack up. Given a finite amount of money, and given a need to implement a backup solution in the first place, all you're doing is spending money to reduce your outage frequency. There are better ways to spend that money.


Modern and advanced systems don't implement shapes like that because they're excessively complicated, completely unnecessary, and contrary to any semblance of efficiency.

As others have pointed out, the ratio of raw space to usable space is essentially 3:1 - that is, three copies (two of them redundant). Because of the calculation cost of "raid6" (twice over, if mirrored) and the resulting loss of IOPS, this is very inefficient. In ZFS, which is very well designed and tuned, the capacity-equivalent solution would be to create a stripe of 3-way mirrors.

As an example, instead of a mirror of 6-way raid6/raidz2 shapes (12 drives total) - which would be very inefficient, and is also not something ZFS has any mechanism to implement - you would have 4 × 3-way mirrors (also 12 drives). And instead of 1 drive's worth of IOPS, you would have 4 drives' worth. Especially with virtual machines, that is a vast difference. The total bandwidth of the two shapes may be very similar for sequential reads/writes, but the stripe of 3-way mirrors would definitely be more responsive for random reads/writes.

To sum up: raid1+6 is just generally impractical, inefficient, and unsurprisingly not anything anyone serious about storage would consider developing.

To clarify the IOPS disparity: with a mirror of raid6/raidz2 shapes, every write forces all 12 drives to act as one; there is no way for the overall shape to split the activity into multiple actions that multiple sub-shapes can perform independently. With a stripe of 3-way mirrors, each write may be something that only one of the 4 mirrors must deal with, so another incoming write doesn't have to wait for the whole omnibus shape before anything further happens.
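
To put rough numbers on that (purely illustrative, assuming a hypothetical ~100 random IOPS per spindle):

    per_drive_iops = 100  # hypothetical 7.2k spindle
    # the 12-drive mirror-of-raidz2 shape services writes as one unit:
    mirror_of_raidz2 = 1 * per_drive_iops
    # a stripe of 4 independent 3-way mirror vdevs services 4 at once:
    stripe_of_mirrors = 4 * per_drive_iops
    print(mirror_of_raidz2, stripe_of_mirrors)  # -> 100 vs 400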


Since no one has said it directly enough: RAID 6 write performance is not marginally worse. It is horrible beyond description if put under load.

Sequential writing is OK, and as long as caching, write merging etc. are able to cover it up, everything looks fine. Under high load things look bad, and this is the main reason a 1+5/6 setup is almost never used.