Best way to grow Linux software RAID 1 to RAID 10

With Linux software RAID (mdadm) you can create a RAID 10 array with only two disks.

Device names used below:

  • md0 is the old array of type/level RAID1.
  • md1 is the new array of type/level RAID10.
  • sda1 and sdb2 are new, empty partitions (without data).
  • sda2 and sdc1 are old partitions (with crucial data).

Replace names to fit your use case. Use e.g. lsblk to view your current layout.
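
For example, to get a quick overview of disks, partitions, and any existing RAID members (the column list is just a suggestion):

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT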

0) Backup, Backup, Backup, Backup oh and BACKUP

1) Create the new array (4 devices: 2 existing, 2 missing):

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda1 missing /dev/sdb2 missing

Note that in this example layout sda1 has a missing mirror and sdb2 has another missing mirror. Your data on md1 is not safe at this point (effectively it is RAID 0 until you add the missing members).

To view the layout and other details of the created array, use:

mdadm -D /dev/md1

Note! You should save the layout of the array:

# View current mdadm config:
cat /etc/mdadm/mdadm.conf
# Add new layout (grep is to make sure you don't re-add md0):
mdadm --detail --scan | grep "/dev/md1" | tee -a /etc/mdadm/mdadm.conf
# Save config to initramfs (to be available after reboot)
update-initramfs -u

2) Format and mount. /dev/md1 should be immediately usable, but it needs to be formatted and then mounted.
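
A minimal example, assuming ext4 and a temporary mount point of /mnt/md1 (both are just placeholders):

# Create a filesystem on the new array
mkfs.ext4 /dev/md1
# Mount it
mkdir -p /mnt/md1
mount /dev/md1 /mnt/md1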

3) Copy files. Use e.g. rsync to copy the data from the old RAID 1 to the new RAID 10. (The command below is only an example; read the rsync man page.)

rsync -arHx / /where/ever/you/mounted/the/RAID10
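
If md0 is not your root filesystem but mounted somewhere else, point rsync at that mount point instead (the source path below is a placeholder; the trailing slash on the source matters):

rsync -aHx /mnt/old-raid1/ /where/ever/you/mounted/the/RAID10/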

4) Fail the first member of the old RAID 1 (md0) and add it to the new RAID 10 (md1):

mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md1 --add /dev/sda2

Note! This wipes the data on sda2. md0 should still be usable, but only if its other RAID member is fully operational.
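
Before (or right after) running the two commands above, it is worth confirming that md0's remaining member is healthy, for example:

# md0 should report a clean state and no failed devices
mdadm -D /dev/md0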

Also note that this starts the sync/recovery process on md1. To check its status, use one of the commands below:

# status of sync/recovery
cat /proc/mdstat
# details
mdadm -D /dev/md1

Wait until recovery is finished.

5) Install GRUB on the new array (assuming you are booting from it). A Linux rescue/boot CD works best for this.
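
A minimal sketch for a BIOS + GRUB 2, Debian/Ubuntu-style system, assuming you chroot into the new array from the rescue environment and that sda and sdb carry the new array (adjust names and paths to your layout):

# Mount the new root and the pseudo-filesystems GRUB needs
mount /dev/md1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# Install GRUB to both disks of the new array and regenerate its config
grub-install /dev/sda
grub-install /dev/sdb
update-grub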

6) Boot from the new array. IF IT WORKED CORRECTLY, destroy the old array and add the remaining disk to the new array.

POINT OF NO RETURN

At this point you will destroy data on the last member of the old md0 array. Be absolutely sure everything is working.

mdadm --stop /dev/md0
# md0 is gone now; wipe the old RAID 1 metadata from sdc1 before reusing it
mdadm --zero-superblock /dev/sdc1
mdadm /dev/md1 --add /dev/sdc1

And again - wait until recovery on md1 is finished.

# status of sync/recovery
cat /proc/mdstat
# details
mdadm -D /dev/md1

7) Update mdadm config

Remember to update /etc/mdadm/mdadm.conf (remove md0).
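
One way to do that (this keeps a backup copy of the file; review the result before rebooting):

# Drop any line mentioning md0 from the config, keeping the original as .bak
sed -i.bak '/md0/d' /etc/mdadm/mdadm.conf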

And save config to initramfs (to be available after reboot)

update-initramfs -u

Follow the same procedure as Mark Turner, but when you create the RAID array, specify two missing disks:

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda1 missing /dev/sdb2 missing

Then proceed with the other steps.

In short: create a RAID 10 with four disks in total (two of which are missing), then add the other two disks afterwards and let the array resync.


Just finished going from LVM on a two-disk 2 TB mdadm RAID 1 to LVM on a four-disk RAID 10 (two original + two new disks).

As @aditsu noted, the drive order is important when creating the array.

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda missing /dev/sdb missing

The command above gives a usable array with two missing disks (add partition numbers if you aren't using whole disks). As soon as the third disk is added it will begin to sync. I added the fourth disk before the third had finished syncing; it showed as a spare until the third disk finished, then it started syncing.
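
To keep an eye on the rebuild while the new members sync, something like this is handy:

watch -n 5 cat /proc/mdstat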

Steps for my situation:

  1. Make good backup.

  2. Create a degraded 4 disk RAID 10 array with two missing disks (we will call the missing disks #2 and 4).

  3. Tell wife not to change/add any files she cares about

  4. Fail and remove one disk from the RAID 1 array (disk 4).

  5. Move physical extents from the RAID 1 array to the RAID 10 array, leaving disk 2 empty (see the pvmove sketch below).

  6. Kill the active RAID 1 array, add that now empty disk (disk 2) to the RAID 10 array, and wait for resync to complete.

  7. Add the first disk removed from RAID 1 (disk 4) to the RAID 10 array.

  8. Give wife the go-ahead.

At step 7 I think drive 1, 2, OR 4 can fail (during the resync of disk 4) without killing the array. If drive 3 fails, the data on the array is toast.
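
Since this setup is LVM on top of mdadm, step 5 boils down to a pvmove plus some volume group housekeeping before the old array can be killed in step 6. A minimal sketch, assuming the old RAID 1 is /dev/md0, the new RAID 10 is /dev/md1 and the volume group is called vg0 (all placeholder names; the LVM walkthrough below shows the same idea with its own names):

# Make the new RAID 10 a physical volume and add it to the volume group
pvcreate /dev/md1
vgextend vg0 /dev/md1
# Move all extents off the old RAID 1, then drop it from the volume group
pvmove -v /dev/md0 /dev/md1
vgreduce vg0 /dev/md0
pvremove /dev/md0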


I did it with LVM. Initial configuration: sda2 and sdb2, with a RAID 1 (md1) created on top of them. sda1 and sdb1 were used for a second RAID 1 holding the /boot partition. md1 was a PV in the volume group space, with some LVs on it.

I added disks sdc and sdd and created partitions on them like those on sda/sdb.

So:

  1. created md10 as:

    mdadm --create /dev/md10 --level raid10 --raid-devices=4 /dev/sdc2 missing /dev/sdd2 missing

  2. extend vg on it:

    pvcreate /dev/md10
    vgextend space /dev/md10

  3. moved volumes from md1 to md10:

    pvmove -v /dev/md1 /dev/md10

    (wait until it is done)

  4. reduce the volume group:

    vgreduce space /dev/md1
    pvremove /dev/md1
  5. stop array md1:

    mdadm -S /dev/md1

  6. add disks from old md1 to md10:

    mdadm -a /dev/md10 /dev/sda2 /dev/sdb2

  7. update configuration in /etc/mdadm/mdadm.conf:

    mdadm -E --scan >> /etc/mdadm/mdadm.conf

    (and remove the old md1 entry from that file)
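
Before calling it done, it does not hurt to confirm that the volume group now lives entirely on md10 and that the old PV is gone:

# The volume group "space" should list only /dev/md10 as a physical volume
pvs
vgs space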

Everything was done on a live system, with active volumes in use by KVM guests ;)


I have now moved my raid1 to raid10. This page helped me, but some things are missing from the answers above. In particular, my aim was to keep the ext4 birth times.

The setup was:

  • 2 RAID 1 disks (forming md0) with an ext4 partition, each disk with an msdos (MBR) partition table
  • 2 fresh new disks becoming the new primaries (all the same size)
  • resulting in a 4-disk RAID 10 (md127) with ext4, but due to the size I had to switch from MBR to GPT
  • it is my home disk, so no boot manager setup is required or intended
  • using my everyday Ubuntu (so: not using an external rescue disc)
  • using gparted, dd and mdadm

As others have stated before: step zero should be a backup, and something can always go wrong in the process, resulting in extreme data loss.

  1. BACKUP

  2. setup of the new raid

    1. create a new raid

      mdadm -v --create /dev/md127 --level=raid10 --raid-devices=4 /dev/sdb1 missing /dev/sde1 missing
      

      (I found that the layout is important: the 2nd and 4th devices seem to be the mirror copies in a default 'near' layout.)

    2. partition the new RAID: I used gparted to set up GPT on md127 and then added a new ext4 partition the size of the old one or greater
  3. migrate

    1. now get the data over: I first tried rsync, which worked but failed to keep the birth times, so use dd to clone from the old RAID to the new one

      dd if=/dev/md0 of=/dev/md127p1 bs=1M conv=notrunc,noerror,sync
      

      WAIT FOR IT
      you can check progress by sending USR1 to the dd process (see also the aside at the end of this answer)

      kill -s USR1 <pid>
      
    2. fix the RAID
      gparted is a great tool: you tell it to check & fix the partition and resize it to the full size of that device with just a few mouse clicks ;)

    3. set a new UUID on that partition and update your fstab accordingly (change the UUID)

    4. store your raid in conf

      mdadm --examine --scan  >> /etc/mdadm/mdadm.conf
      

      and remove the old one

      vim /etc/mdadm/mdadm.conf 
      
    5. reboot if you're not on a rescue system
  4. destroying the old one

    1. fail the first one and add it to the new raid

      mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
      

      then create a GPT partition table on that disk and set up a new empty partition

      mdadm /dev/md127 --add /dev/sdc1
      

      WAIT FOR IT
      you can check with

      cat /proc/mdstat
      
    2. stop the second one

      mdadm --stop /dev/md0 
      

      then create a GPT partition table on that last disk and set up a new empty partition again

      mdadm /dev/md127 --add /dev/sdd1
      

      WAIT FOR IT again
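
An aside on the dd step above: if your dd comes from GNU coreutils 8.24 or newer, status=progress prints live progress and saves you the USR1 signalling (same device names as in the original command):

dd if=/dev/md0 of=/dev/md127p1 bs=1M conv=notrunc,noerror,sync status=progress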