Is using nested RAID 6+1+0 a good idea?
Currently I have 24 disks of 500 GB, and I would like to build a bigger nested RAID on a Dell PowerEdge R730 server: four RAID-6 arrays of six 500 GB disks each, RAID-1 mirroring over those, and RAID-0 striping on top of everything.
My question is: is this safe? Is it worth creating an array this big?
I know that I will get amazing speed, but the maintenance and failover will cost me.
See the proposed layout in the picture (the picture shows 48 disks).
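In case the picture does not come through, here is a rough sketch of the nesting I have in mind, written out as a purely illustrative Python structure for the 24 disks mentioned above (the picture simply doubles every group):

```python
# Purely illustrative sketch of the proposed nesting (not an actual controller config):
# four RAID-6 groups of six 500 GB disks, mirrored in pairs (RAID-1),
# with a RAID-0 stripe across the two mirrors.
proposed_layout = {
    "RAID-0": [
        {"RAID-1": [
            {"RAID-6": [f"disk{i:02d}" for i in range(0, 6)]},
            {"RAID-6": [f"disk{i:02d}" for i in range(6, 12)]},
        ]},
        {"RAID-1": [
            {"RAID-6": [f"disk{i:02d}" for i in range(12, 18)]},
            {"RAID-6": [f"disk{i:02d}" for i in range(18, 24)]},
        ]},
    ]
}
```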
I think this is pushing the boundaries of the RAID concept and you will run into trouble. Will the PERC controller allow you to add virtual disks to another array? Won't each have its own write and cache policy? What is the cache size on your controller? Anyway, if performance is what you are after, have you looked at Ceph? It's certified to run on the R730s, but you would need an SSD journal disk: all writes happen to the SSD and are moved to the array later. It doesn't need RAID for redundancy and offers object, block and file storage as well as erasure coding.
Considering only the concepts involved at each layer of the stack, and not the specific implementation, it could make sense for some applications to layer all three of those RAID modes.
However, the layout in the depicted diagram has a serious flaw: you have ordered the layers incorrectly. For optimal performance and reliability you have to swap the order of the RAID-1 and RAID-6 layers.
Usually RAID-6 is configured to tolerate the loss of two disks, so a RAID-6 is expected to fail once you have lost three disks. That means that, in the worst case, the loss of three out of the 48 disks would cause one of the RAID-6 components to fail.
Your data would survive that incident, but you'd have to create a new RAID-6 from the 9 good disks and 3 new disks. Once that is done, you'd have to both synchronize the newly created RAID-6 and have the RAID-1 layer replicate from the other RAID-6 to the one that is still synchronizing. That's a really I/O-heavy operation.
So a case of 3 lost disks both requires administrator attention to recover, and is I/O heavy.
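As a rough back-of-the-envelope illustration of how much data has to move in that case (assuming 12-disk RAID-6 groups of 500 GB disks as in the 48-disk picture, and a full initialisation pass on the replacement array; real controllers will differ):

```python
# Back-of-the-envelope I/O estimate for the worst case above (all figures in GB).
# Assumptions: 12-disk RAID-6 groups of 500 GB disks, a full initialisation pass
# on the replacement RAID-6, and a full RAID-1 re-mirror afterwards.
disk = 500
raid6_members = 12
raid6_usable = (raid6_members - 2) * disk       # 5,000 GB of data per RAID-6 group

init_io      = raid6_members * disk             # ~6,000 GB touched to initialise parity
remirror_rd  = raid6_usable                     # ~5,000 GB read from the surviving mirror
remirror_wr  = raid6_members * disk             # ~6,000 GB written, including parity updates

print(init_io + remirror_rd + remirror_wr)      # roughly 17,000 GB of disk I/O
```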
Instead, you could first group the 24 disks into 12 pairs using RAID-1, and then combine those 12 RAID-1 arrays into a single RAID-6.
This way the loss of a single disk can always be recovered at the RAID-1 layer, which is much more efficient than a recovery at the RAID-6 layer. And even in the case of 5 lost disks, you are guaranteed that the RAID-6 layer will survive.
In both cases your data will survive the loss of 5 disks, but there is a difference in how quickly you recover.
In both cases your data could be lost due to the loss of 6 disks, but the risk is much higher in your depicted scenario than if the layers are swapped around.
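To make "much higher" concrete, here is a small exact enumeration of all possible 6-disk failure sets for the 24-disk layout described in the question text (independent failures assumed; this is only the combinatorics, not a model of real-world correlated failures):

```python
# Exact comparison of the chance that 6 simultaneous disk failures destroy the
# array, for the 24-disk layout described in the question text.
from itertools import combinations

DISKS = range(24)

# Depicted order: four 6-disk RAID-6 groups, mirrored in pairs, striped on top.
raid6_groups = [set(range(i, i + 6)) for i in (0, 6, 12, 18)]
mirrors = [(0, 1), (2, 3)]  # which RAID-6 groups are mirrored together

def depicted_loses_data(failed):
    dead = [len(g & failed) >= 3 for g in raid6_groups]  # a RAID-6 dies at 3 losses
    return any(dead[a] and dead[b] for a, b in mirrors)  # a RAID-1 dies if both halves die

# Swapped order: twelve 2-disk RAID-1 pairs combined into one 12-member RAID-6.
pairs = [set((2 * i, 2 * i + 1)) for i in range(12)]

def swapped_loses_data(failed):
    dead_pairs = sum(len(p & failed) == 2 for p in pairs)  # a pair dies only if both disks die
    return dead_pairs >= 3                                  # the RAID-6 dies at 3 dead members

total = bad_depicted = bad_swapped = 0
for combo in combinations(DISKS, 6):        # every possible set of 6 failed disks
    failed = set(combo)
    total += 1
    bad_depicted += depicted_loses_data(failed)
    bad_swapped += swapped_loses_data(failed)

print(f"depicted: {bad_depicted}/{total}")   # 800/134596  (~0.59 %)
print(f"swapped:  {bad_swapped}/{total}")    # 220/134596  (~0.16 %)
```

In this sketch the depicted ordering loses data in 800 of the 134,596 possible 6-disk failure combinations, versus 220 for the swapped ordering, roughly a factor of 3.6.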
Implementation details
The more layers you use, the higher the risk of running into cases that the specific implementation has problems handling. One question to keep in mind is whether hot spares can be shared among the various branches of the structure. Another is how automated the recovery from the loss of one of the sub-RAIDs would be. For example, if you lost both disks in one of the RAID-1s at the lowest layer, could it automatically create a new RAID-1 from two hot spares and use that as a spare for the next layer?
I think the architecture you have in mind is overly complex without any reason I can think of. Essentially you are wasting 28 of the 48 disks in your array on redundancy. RAID6 was invented because many consider RAID1/RAID10 too wasteful, yet you go even further by applying RAID10 on top of RAID6.
I would recommend either using RAID10 altogether here, if you really do need all this redundancy, or going with RAID6 + RAID0 (a.k.a. RAID60).
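For comparison, a quick usable-capacity calculation for the 48-disk picture (500 GB disks; the exact grouping is my assumption, based on four 12-disk RAID-6 sets):

```python
# Rough usable-capacity comparison for the 48-disk picture (500 GB disks).
disk_gb = 500

# Depicted RAID 6+1+0: four 12-disk RAID-6 sets (10 data disks each),
# mirrored in pairs (halves capacity), striped on top.
nested_610 = 4 * (12 - 2) * disk_gb // 2    # 10,000 GB usable (20 of 48 disks)

# RAID10: every disk mirrored once.
raid10 = 48 // 2 * disk_gb                  # 12,000 GB usable

# RAID60: four 12-disk RAID-6 spans striped together, no extra mirroring.
raid60 = 4 * (12 - 2) * disk_gb             # 20,000 GB usable

print(nested_610, raid10, raid60)
```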
Also, keep in mind that the sensible size for a RAID6 array is 8-20 disks, with 12-16 being most common. Going beyond that is technically possible but impractical because of painfully long recovery times.