Mounting disk(s) from RAID 1 array for data recovery

I'm having some problems with a Debian 7.5 stable (Wheezy) server. It is currently running in rescue mode.

It has two 2000 GB HDDs running in a RAID 1 array.

My immediate priority is to access and back up a specific directory (/home/servers/).

The problem is that I'm having a hard time mounting the disk(s). I'd prefer to do this the clean way, through the RAID array, but anything is fine as long as I can access the data, because I will later reinstall everything and switch to Ubuntu.

root@rescue:~# fdisk -l

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1  3907029167  1953514583+  ee  GPT
Partition 1 does not start on physical sector boundary.

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  3907029167  1953514583+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/md2: 1978.4 GB, 1978380779520 bytes
2 heads, 4 sectors/track, 483003120 cylinders, total 3864024960 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md1: 21.5 GB, 21474770944 bytes
2 heads, 4 sectors/track, 5242864 cylinders, total 41942912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table
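
Note: since Wheezy's fdisk doesn't understand GPT, the single sda1/sdb1 entries above are just protective placeholders; the real partition layout could be printed with parted:

parted /dev/sda print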

First I tried the easy way:

root@rescue:~# mount /dev/sda1 sda1
mount: unknown filesystem type 'linux_raid_member'
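
From what I've read, 'linux_raid_member' means the partition carries mdraid metadata rather than a plain filesystem, so it has to be mounted through the assembled md device; blkid should report the same type:

blkid /dev/sda1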

Then I tried to follow this guide: http://blog.sleeplessbeastie.eu/2012/05/08/how-to-mount-software-raid1-member-using-mdadm/

root@rescue:~# mdadm -A -R /dev/md9 /dev/sda1
mdadm: /dev/sda1 is busy - skipping
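
To check which array /dev/sda1 already belongs to, its RAID superblock can be examined:

mdadm --examine /dev/sda1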

I restarted the server, but /dev/sda1 is still busy. Now I'm stuck, and this is a production server! Please help me; I don't know how to proceed from here.


UPDATE:

root@rescue:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid1 sda1[0] sdb1[1]
      20971456 blocks [2/2] [UU]

md2 : active raid1 sda2[0] sdb2[1]
      1932012480 blocks [2/2] [UU]

unused devices: <none>
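
For more detail on either array (state, member disks, UUID), mdadm can query the running device, e.g.:

mdadm --detail /dev/md2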


root@rescue:~# cat /etc/fstab
# /etc/fstab: filesystem information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc    /proc   proc    defaults        0       0

Solution 1:

What happens if you try the following?

mkdir /mnt/md1
mount /dev/md1 /mnt/md1

mkdir /mnt/md2
mount /dev/md2 /mnt/md2

This creates empty directories to use as mount points, then tries to mount the RAID filesystems. If mount hits any problem, it will report an error and leave the arrays untouched.
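
If mount instead complains about an unknown filesystem type, check what is actually on the arrays first; blkid prints the filesystem type of each device:

blkid /dev/md1 /dev/md2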

If those commands work, your files will most likely be at /mnt/md2/servers/ (if md2 is mounted as /home) or at /mnt/md1/home/servers/ (if /home lives on the root filesystem).
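
Once the data is visible, you can copy it off before the reinstall. Here /dev/sdc1 and /mnt/backup are placeholders for whatever backup target you actually have available:

mkdir /mnt/backup
mount /dev/sdc1 /mnt/backup
rsync -aH --progress /mnt/md2/servers/ /mnt/backup/servers/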


For future reference:

/proc/mdstat dumps a list of all active and inactive RAID arrays. For you, it shows that you have a 21.5 GB array (md1) built on /dev/sda1 and /dev/sdb1, and a second ~1.98 TB array (md2) built on /dev/sda2 and /dev/sdb2. This is a fairly common partitioning scheme: the OS and applications reside on a small partition (md1), and all the user data (/home/*) is stored on a separate, larger partition (md2). This makes it easy to wipe just the OS partition and reinstall without having to move a bunch of user data around.

/etc/fstab lists all the default mount points in the system. It isn't very helpful here because you're running a rescue system, but on the installed system we'd expect to see something like /dev/md1 mapped to / and /dev/md2 mapped to /home (if the assumption about partitioning in the previous paragraph holds).
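
For example, the installed system's fstab might contain entries like these (the ext4 filesystem type is an assumption; it isn't visible in any of the output above):

/dev/md1  /      ext4  errors=remount-ro  0  1
/dev/md2  /home  ext4  defaults           0  2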

When a system boots up, it auto-detects RAID arrays and starts them if it finds all the member drives. That's why they're already running, and why mdadm failed with a busy error when you tried to assemble a /dev/md9 array manually: the md1 array was already using /dev/sda1. However, since this is a rescue system, there was no mount information in /etc/fstab telling it to mount /dev/md1 and /dev/md2 anywhere; that is what the block of commands above does manually.
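
If the arrays had not been auto-started by the rescue system, the usual way to bring them up would be to let mdadm scan for members and assemble whatever it finds, instead of naming devices by hand:

mdadm --assemble --scan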