How to re-add an accidentally removed hard drive to a RAID 5 array

I have a NAS running Ubuntu Server with four 2 TB hard drives in RAID 5. A couple of weeks ago, one of the hard drives died, but the RAID kept working, although degraded. Luckily it was still under warranty, and I was sent a new hard drive, which I installed today. However, when I tried to add the new hard drive to the RAID, it was not rebuilding. So I unplugged the hard drive and rebooted the machine. However, I accidentally marked one of my OTHER hard drives in the RAID as failed and removed it using mdadm.

Now it says my RAID has two removed hard drives. I still have my 3rd hard drive with all my data intact, but I don't know how to re-add it to the RAID array so that it's back to a good (although degraded) state, at which point I can add the 4th hard drive and rebuild the array. Is it possible to just have Ubuntu realize that the 3rd hard drive has my data and recognize it as part of the array again?

When I try to run:

sudo mdadm --manage /dev/md127 --re-add /dev/sdd1 

It says:

mdadm: --re-add for /dev/sdd1 to /dev/md127 is not possible

Please, any help that anyone can give would be much, much appreciated.


You might need to just do an --add and not a --re-add. If you read the man page about --re-add, it talks about re-adding the device if the event count is close to the rest of the devices. You can use --examine to find this out.

$ mdadm --examine /dev/sd[a-z]1 | egrep 'Event|/dev/sd'
mdadm: No md superblock detected on /dev/sda1.
/dev/sdb1:
         Events : 992655
/dev/sdd1:
         Events : 992655
/dev/sde1:
         Events : 992655
/dev/sdf1:
         Events : 992655
/dev/sdg1:
         Events : 158
/dev/sdh1:
         Events : 992655
/dev/sdj1:
         Events : 992655

As you can see, my /dev/sdh1 device has not been in the array for some time; --re-add will not work, and you will have to do an --add to recover the array.
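For reference, a minimal sketch of what that --add looks like, using the array and device names from my example above (substitute your own, check the event counts with --examine first, and note you may only need the --remove step if the kernel still lists the device as failed/removed):

$ mdadm --manage /dev/md126 --remove /dev/sdh1
$ mdadm --manage /dev/md126 --add /dev/sdh1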

You can use mdadm --detail /dev/md126 to watch what is happening. It might not be a bad idea to run this before you do anything; after all, this is your data!

$ mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Tue Jun 24 05:17:47 2014
     Raid Level : raid6
     Array Size : 14650158080 (13971.48 GiB 15001.76 GB)
  Used Dev Size : 2930031616 (2794.30 GiB 3000.35 GB)
   Raid Devices : 7
  Total Devices : 7
    Persistence : Superblock is persistent

    Update Time : Thu Nov  6 05:47:56 2014
          State : clean, degraded, recovering
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : omegacentauri:0  (local to host omegacentauri)
           UUID : 9fdcacc0:14f7ef3c:a6931b47:bfb8b4a1
         Events : 992656

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1
       4       8       97        4      active sync   /dev/sdg1
       5       8      145        5      active sync   /dev/sdj1
       7       8      113        6      spare rebuilding   /dev/sdh1

Or you can use /proc/mdstat too:

$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active raid6 sdh1[7] sdg1[4] sdj1[5] sdf1[3] sdd1[1] sde1[2] sdb1[0]
      14650158080 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/6] [UUUUUU_]
      [>....................]  recovery =  0.9% (26657536/2930031616) finish=1162.5min speed=41624K/sec

md127 : active (auto-read-only) raid1 sdi[1] sdc[0]
      1465007360 blocks super 1.2 [2/2] [UU]
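If you want the recovery progress to refresh on its own, the standard watch utility works (the 5-second interval here is just an example):

$ watch -n 5 cat /proc/mdstat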

I am not responsible for any of your lost data.


(initially posted in a comment by the OP)

I think I was able to get it back to a degraded state.

I was able to use the mdadm --assemble --force command from the documentation, and I believe it got the array back to a state where at least 3 out of the 4 drives are working.

For anyone in the future who comes across this issue, this is the command I used (assuming the 3 working drives are sdb, sdc, and sdd, each with a single partition sdb1, sdc1, and sdd1):

mdadm --assemble --force /dev/md127 /dev/sdb1 /dev/sdc1 /dev/sdd1
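From here, adding the replacement 4th drive back should just be a plain --add once it has a matching partition on it. The device name below (/dev/sde1) is only an assumption; substitute whatever your new disk actually shows up as:

mdadm --manage /dev/md127 --add /dev/sde1

Then cat /proc/mdstat will show the rebuild progress.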