mdadm: how to use an array as a "member disk" in another array
I had a 4-disk RAID 5 array with mdadm. One of the disks failed. As a replacement, I bought a new disk three times the size of the original disks.
I now have
- 3x5TB disks in a degraded RAID5
- 1x16TB disk, free and unused
I want to change the existing RAID from 5 to a sort of 1+0, where only the old disks would form the "0" part:
- 3x5TB disks in RAID0
- 1x16TB disk
These two would then be assembled into the RAID 1 I want to end up with.
I'm thinking I should:
- add the new disk to a new, degraded RAID 1
- copy all the data from the degraded RAID 5
- change mount points to the new array
- tear down the RAID5 and make a RAID0 from its disks
- add the RAID0 to the RAID1
Alternatively, as a fallback, I could replace the last step with the creation of a btrfs RAID 1 volume.
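In commands, the plan above would look roughly like this (a sketch only; /dev/md0 as the old RAID5, /dev/sda to /dev/sdc as the 5TB disks, /dev/sdd as the new 16TB disk, and /mnt/old, /mnt/new as mount points are all hypothetical names):

# 1. Create a degraded RAID1 on the new disk. Make sdd1 no larger
#    than the ~15TB the future 3x5TB RAID0 will provide, or the
#    RAID0 will be too small to join the mirror later.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 missing
mkfs.ext4 /dev/md1

# 2. Copy the data off the degraded RAID5
mkdir -p /mnt/new
mount /dev/md1 /mnt/new
rsync -aHAX /mnt/old/ /mnt/new/

# 3. Point /etc/fstab at the new array, then tear down the RAID5
#    and rebuild its disks as a RAID0
umount /mnt/old
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md2 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# 4. Add the RAID0 as the second member of the RAID1
mdadm --manage /dev/md1 --add /dev/md2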
Could this work?
Solution 1:
Yes, this should work, but there is a better way to achieve something like this.
If you ever wondered how RAID10 or RAID60 arrays are built, it's like this: the system builds a number of small RAID1 or RAID6 arrays and then combines them into one large "RAID0" array. Not vice versa, i.e. not many RAID0s mirrored or assembled with additional parity devices.
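For illustration, mdadm's native RAID10 level builds exactly that layout (mirrored pairs, striped together) in a single step; a minimal four-device sketch with hypothetical device names:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1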
To achieve a similar setup, do the following (a full command sketch follows the list):
- Partition the large disk into three partitions, each equal in size to a single smaller disk's partition,
- Make "degraded" RAID1 arrays out of each partitions, like
mdadm --create /dev/mdN -l1 -n2 /dev/sdXY missing
- Make them LVM PVs,
pvcreate /dev/mdN
and build an LVM VG out of these three PVs:
vgcreate my_vg /dev/mdN /dev/mdM /dev/mdP
- Create logical volumes as needed, migrate the data, and remove the old array
- Repartition the smaller disks to each have a single partition and add each disk into its RAID1
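A sketch of the whole sequence, assuming the old RAID5 is /dev/md0, the 16TB disk is /dev/sdd, and the 5TB disks are /dev/sda through /dev/sdc (all hypothetical names; adjust to your system):

# 1. Create three partitions sdd1..sdd3 on the large disk, each at
#    least as large as a small disk's partition, then build the
#    degraded mirrors
mdadm --create /dev/md1 -l1 -n2 /dev/sdd1 missing
mdadm --create /dev/md2 -l1 -n2 /dev/sdd2 missing
mdadm --create /dev/md3 -l1 -n2 /dev/sdd3 missing

# 2. Stack LVM on top of the three mirrors
pvcreate /dev/md1 /dev/md2 /dev/md3
vgcreate my_vg /dev/md1 /dev/md2 /dev/md3
lvcreate -l 100%FREE -n my_lv my_vg
mkfs.ext4 /dev/my_vg/my_lv

# 3. Mount the LV and migrate the data (e.g. with rsync), then
#    retire the old RAID5
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1

# 4. Give each small disk a single partition and complete the mirrors
mdadm --manage /dev/md1 --add /dev/sda1
mdadm --manage /dev/md2 --add /dev/sdb1
mdadm --manage /dev/md3 --add /dev/sdc1

# Record the arrays so they assemble on boot (Debian/Ubuntu paths)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u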
This way you:
- avoid MD over MD (which could be assembled by hand, but I am not sure it would assemble automatically on boot)
- introduce LVM, which improves volume management; LVM over MD is a very standard and well-supported configuration
- when one of the smaller disks dies, you'll replace it and resync only that part; if you went the "RAID1 out of RAID0" way, you'd have to resync the whole data set.
This last argument is actually why redundancy is always done at the lowest level, while combining these smaller redundant pieces (striping) is left to the higher levels.
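For example, if one of the small disks (say /dev/sdb, a hypothetical name) dies in the setup above, only its own ~5TB mirror has to resync:

mdadm --manage /dev/md2 --fail /dev/sdb1     # mark it failed, if the kernel hasn't already
mdadm --manage /dev/md2 --remove /dev/sdb1   # detach it from the array
# physically replace the disk, recreate the partition, then:
mdadm --manage /dev/md2 --add /dev/sdb1      # resyncs this one mirror, not the whole 15TB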