Ubuntu Server 14.04 - RAID5 created with mdadm disappears after reboot
This is my first question on superuser, so if I forgot to mention something, please ask.
I'm trying to set up a home server that will be used as a file and media server. I installed Ubuntu Server 14.04 and am now trying to set up a RAID5 across a total of 5 disks, using mdadm. After the RAID has been created, I am able to use it and I can also access it from other PCs. After rebooting the server, the RAID does not show up anymore, and I have also not been able to assemble it again.
I have done the following steps:
Create the RAID
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 /dev/sda /dev/sdc /dev/sdd /dev/sde /dev/sdf
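For reference, mdadm --create writes a persistent superblock to each member disk; that metadata is what --assemble later scans for. The superblock on any member can be inspected, e.g.:
mdadm --examine /dev/sda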
After the RAID build completed (monitored with watch cat /proc/mdstat), I stored the RAID configuration:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Then I removed some parts of the entry in mdadm.conf (a sketch of a typical full entry follows the file). The resulting file looks as follows:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
#DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This file was auto-generated on Fri, 14 Mar 2014 23:38:10 +0100
# by mkconf $Id$
ARRAY /dev/md0 UUID=b73a8b66:0681239e:2c1dd406:4907f892
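For comparison, the full line that mdadm --detail --scan appends typically looks something like this (placeholders instead of real values; the trimmed version above keeps only the UUID):
ARRAY /dev/md0 metadata=1.2 name=<hostname>:0 UUID=<array-uuid>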
A check whether the RAID is working (mdadm --detail /dev/md0) returns the following:
/dev/md0:
Version : 1.2
Creation Time : Sat Apr 19 15:49:03 2014
Raid Level : raid5
Array Size : 7813531648 (7451.56 GiB 8001.06 GB)
Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Sat Apr 19 22:13:37 2014
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : roembelHomeserver:0 (local to host roembelHomeserver)
UUID : c29ca6ea:951be1e7:ee0911e9:32b215c8
Events : 67
Number  Major  Minor  RaidDevice  State
   0      8      0        0       active sync   /dev/sda
   1      8     32        1       active sync   /dev/sdc
   2      8     48        2       active sync   /dev/sdd
   3      8     64        3       active sync   /dev/sde
   5      8     80        4       active sync   /dev/sdf
As far as I can tell, this all looks good. As a next step, I created the file system:
mke2fs -t ext4 /dev/md0
This results in the following output:
mke2fs 1.42.8 (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=512 blocks
244174848 inodes, 1953382912 blocks
97669145 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
59613 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
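For what it's worth, the Stride and Stripe width values match the RAID geometry: stride = chunk size / block size = 512 KiB / 4 KiB = 128 blocks, and stripe width = stride × data disks = 128 × (5 − 1) = 512 blocks. mke2fs detected this automatically; the explicit equivalent would have been (a sketch, not what I actually ran):
mke2fs -t ext4 -E stride=128,stripe-width=512 /dev/md0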
Then I edited /etc/fstab, adding the following entry at the end of the file:
/dev/md0  /data  ext4  defaults,nobootwait,nofail  0  2
After mounting the RAID (mount -a), I could then use it, create files, and access it from other PCs.
Now comes the problem:
After rebooting the server (reboot now), the RAID does not exist anymore, i.e.:
- no /dev/md0
- /proc/mdstat is empty (apart from the Personalities line)
- df -h does not show the array
- mdadm --assemble --scan does not do anything
Does anyone have any suggestions? Did I do something wrong?
Solution 1:
Sounds like you forgot one step: telling the initramfs to load your array at boot. All your steps were correct and in chronological order, but it sounds like you missed that final one. Since I don't know your server's current status, I suggest you try the following:
Boot up and type
mdadm --detail --scan
Do you see anything? If so, your array is there and it should work (i.e. the solution below probably won't help). I'm guessing that when you reboot, you are not seeing your RAID device at all. If that is true, work through the following:
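Before anything else, it's worth asking mdadm why auto-assembly fails; the verbose flag makes the scan explain, per device, what it is rejecting (a sketch):
mdadm --assemble --scan --verbose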
Make sure the mdadm daemon is running:
ps aux | grep mdadm
This will show you whether any mdadm processes are running (if you see no result, start mdadm; a sketch follows below).
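On Ubuntu 14.04 the mdadm package ships an init script, so starting the daemon would look something like this (a sketch, assuming the stock package's service name):
sudo service mdadm start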
Make sure the array is mounted
mount -a
Update initramfs
update-initramfs -u
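To double-check that the rebuilt initramfs actually picked up your mdadm.conf, you can list its contents (a sketch using initramfs-tools' lsinitramfs):
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm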
Verify mdadm is not running a sync or rebuild:
watch cat /proc/mdstat
If anything is still processing, let it finish first, lest you screw up your array.
Reboot and test
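After the reboot, a quick verification pass could look like this (device and mount point taken from the question):
cat /proc/mdstat          # the array should be listed and active
mdadm --detail /dev/md0   # State should be "clean"
df -h /data               # the filesystem should be mounted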