RAID1: How do I "Fail" a drive that's marked as "removed"?
You shouldn't need to fail them, since they were already failed when you first noticed the issue and the RAID members now show as removed. There are just a few steps to get the array back up and running.
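If you want to double-check the current state first, `mdadm --detail` lists a missing member as removed. A minimal sketch, assuming the arrays are `/dev/md0`, `/dev/md1`, and `/dev/md2` as in the steps below:

```
# Per-array status; a failed/missing member shows up as "removed" in the device list
mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2

# Quick overview of all arrays and any resync in progress
cat /proc/mdstat
```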
- Set up partitions on the replacement disk. These partitions should be identical in size to those on the failed (and currently active) disk, and should be marked as partition type "Linux RAID Autodetect" (`0xFD`). You can simplify this by copying the partition table with `sfdisk` (see the verification sketch after this list):

  ```
  sfdisk -d /dev/sdb | sfdisk /dev/sda
  ```
- If the disk has been used before, you may want to make sure that any existing software RAID metadata is removed before you begin:

  ```
  mdadm --zero-superblock /dev/sda
  ```
- Install an MBR onto the new disk so that it is bootable. Do this from the `grub` shell; the following assumes that `/dev/sda` is the first disk:

  ```
  root (hd0,0)
  setup (hd0)
  quit
  ```
- Add the new partitions back to the arrays:

  ```
  mdadm --add /dev/md0 /dev/sda1
  mdadm --add /dev/md1 /dev/sda3
  mdadm --add /dev/md2 /dev/sda2
  ```
- Monitor the status of their reconstruction by viewing `/proc/mdstat`. You can automate this with:

  ```
  watch -n10 cat /proc/mdstat
  ```
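For reference, a rough verification sketch for the steps above, assuming (as in the commands above) that `/dev/sda` is the replacement disk, `/dev/sdb` is the surviving member, and `/dev/md0` is one of the arrays:

```
# Compare the partition layout of the replacement disk against the surviving disk
sfdisk -l /dev/sda
sfdisk -l /dev/sdb

# Inspect RAID metadata on a partition; after --zero-superblock it should report
# that no md superblock is present, and after --add it should show the array's UUID
mdadm --examine /dev/sda1

# Once the partitions are added, confirm the members are listed and follow the rebuild
mdadm --detail /dev/md0
cat /proc/mdstat
```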
Also check http://techblog.tgharold.com/2009/01/removing-failed-non-existent-drive-from.shtml, which covers removing a failed, non-existent drive from an array. In short, use:

```
mdadm /dev/mdX -r detached
```
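For completeness, a short sketch of how that looks against one of the arrays above, assuming `/dev/md0` is the array still carrying the phantom member; `detached` is a keyword that `mdadm --remove` accepts for devices which are no longer present in the system:

```
# The slot of a drive that has disappeared shows up as "removed" in the device list
mdadm --detail /dev/md0

# Remove any member whose underlying device no longer exists
mdadm /dev/md0 -r detached

# Confirm the stale slot is gone
mdadm --detail /dev/md0
```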