How do I convert a RAID 4 array to RAID 0?
I had a three-disk RAID 0 array and ran the following to add a fourth disk:
mdadm --manage /dev/md127 --add /dev/xvdi
Each disk is a 1TB EC2 volume. The array was expected to take about 40 hours to reshape. About an hour in, reshaping stopped and the volume became inaccessible. I restarted the machine and reshaping continued and then finished seemingly successfully, but the array level is now reported as RAID 4 and the usable capacity hasn't changed.
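For reference, reshape progress can be checked with:
cat /proc/mdstat
mdadm --detail /dev/md127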
mdadm --detail /dev/md127
now reports the following:
/dev/md127:
Version : 1.2
Creation Time : Wed Jul 1 22:26:36 2015
Raid Level : raid4
Array Size : 4294965248 (4096.00 GiB 4398.04 GB)
Used Dev Size : 1073741312 (1024.00 GiB 1099.51 GB)
Raid Devices : 5
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Oct 11 07:40:48 2015
State : clean, degraded
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : [removed]
UUID : [removed]
Events : 63530
    Number   Major   Minor   RaidDevice State
       0     202      160        0      active sync   /dev/xvdk
       1     202      144        1      active sync   /dev/xvdj
       2     202       80        2      active sync   /dev/xvdf
       4     202      128        3      active sync   /dev/xvdi
       4       0        0        4      removed
My aim here is to have a 4TB RAID 0 array. I don't need redundancy since I back up by taking volume snapshots in AWS. I'm running Ubuntu Server 14.04.3.
How do I switch to RAID 0 without losing any data, taking into account the fact that the state is "clean, degraded"?
Solution 1:
You can change the current configuration directly to RAID 0 with mdadm -G -l 0 /dev/md127. Since a RAID 4 with only 4 of its 5 members present is essentially a RAID 0 without the parity stripe, the conversion happens instantly. If a parity member were present it would be dropped; since that slot is already listed as "removed", Raid Devices is simply decremented to 4 and the state should become "clean".
From the mdadm output printed above, you can see that the member size is 1TB and the array size is 4TB, so the volume should be usable as is, even without the parity member. You will then need to grow the partition with parted (if the filesystem lives inside one) and resize the filesystem as usual.
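A minimal sketch of the whole sequence, assuming the filesystem is ext4 and sits directly on /dev/md127 rather than inside a partition (adjust the last step for your filesystem and layout):
mdadm -G -l 0 /dev/md127      # instant: drops the missing parity slot
mdadm -D /dev/md127           # verify Raid Level : raid0 and State : clean
resize2fs /dev/md127          # grow ext4 to fill the 4TB array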
Solution 2:
I know this is old, but these steps could be helpful to folks.
How to add disks to RAID-0?
Env:
- CentOS 7 (kernel: 3.10.0-327.22.2.el7.x86_64)
- mdadm version v3.4 - 28th January 2016
- First 3 disks of 10GB each
- Fourth disk also 10GB
Initial setup:
$ sudo mdadm --create --verbose /dev/md0 --level=0 --name=DB_RAID2 --raid-devices=3 /dev/xvdh /dev/xvdi /dev/xvdj
$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 5 14:25:10 2017
Raid Level : raid0
Array Size : 31432704 (29.98 GiB 32.19 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Sep 5 14:25:10 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : temp:DB_RAID2 (local to host temp)
UUID : e8780813:5adbe875:ffb0ab8a:05f1352d
Events : 0
    Number   Major   Minor   RaidDevice State
       0     202      112        0      active sync   /dev/xvdh
       1     202      128        1      active sync   /dev/xvdi
       2     202      144        2      active sync   /dev/xvdj
$ sudo mkfs -t ext4 /dev/md0
$ sudo mount /dev/md0 /mnt/test
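It can be reassuring to write a small marker file before reshaping, so you can verify the data survives the level changes (the file name here is arbitrary):
$ echo "still here" | sudo tee /mnt/test/canary.txt
$ cat /mnt/test/canary.txt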
Add a disk to RAID-0 in one step (doesn't work):
$ sudo mdadm --grow /dev/md0 --raid-devices=4 --add /dev/xvdk
mdadm: level of /dev/md0 changed to raid4
mdadm: added /dev/xvdk
mdadm: Failed to initiate reshape!
This probably fails due to a known mdadm bug.
Step-1: Convert to RAID-4. This leaves the array as a degraded 4-device RAID-4 whose missing member is the parity slot, as the [UUU_] below shows:
$ sudo mdadm --grow --level 4 /dev/md0
mdadm: level of /dev/md0 changed to raid4
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 xvdj[2] xvdi[1] xvdh[0]
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5 [4/3] [UUU_]
unused devices: <none>
Step-2: Add a disk:
$ sudo mdadm --manage /dev/md0 --add /dev/xvdk
mdadm: added /dev/xvdk
Wait until it finishes recovering:
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 xvdk[4] xvdj[2] xvdi[1] xvdh[0]
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5 [4/3] [UUU_]
[=>...................] recovery = 8.5% (893572/10477568) finish=3.5min speed=44678K/sec
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 xvdk[4] xvdj[2] xvdi[1] xvdh[0]
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5 [4/4] [UUUU]
unused devices: <none>
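If recovery is slow, the md sync speed limits (in KB/s per device) can be raised temporarily; these are standard kernel tunables, and the values below are only examples:
$ sudo sysctl -w dev.raid.speed_limit_min=50000
$ sudo sysctl -w dev.raid.speed_limit_max=500000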
Step-3: Convert back to RAID-0. During the reshape the array temporarily reports as a degraded 5-device RAID-4 (four data members plus the missing parity slot):
$ sudo mdadm --grow --level 0 --raid-devices=4 /dev/md0
$
Wait until the reshape finishes:
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid4 xvdk[4] xvdj[2] xvdi[1] xvdh[0]
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5 [5/4] [UUUU_]
[===>.................] reshape = 16.2% (1702156/10477568) finish=6.1min speed=23912K/sec
$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md0 : active raid0 xvdk[4] xvdj[2] xvdi[1] xvdh[0]
41910272 blocks super 1.2 512k chunks
Step-4: Resize the Filesystem:
$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 5 14:25:10 2017
Raid Level : raid0
Array Size : 41910272 (39.97 GiB 42.92 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue Sep 5 14:55:46 2017
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : temp:DB_RAID2 (local to host temp)
UUID : e8780813:5adbe875:ffb0ab8a:05f1352d
Events : 107
    Number   Major   Minor   RaidDevice State
       0     202      112        0      active sync   /dev/xvdh
       1     202      128        1      active sync   /dev/xvdi
       2     202      144        2      active sync   /dev/xvdj
       4     202      160        3      active sync   /dev/xvdk
$ df -h
/dev/md0 30G 45M 28G 1% /mnt/test
Run the actual resize, then check again:
$ sudo resize2fs /dev/md0
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/md0 is mounted on /mnt/test; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 5
The filesystem on /dev/md0 is now 10477568 blocks long.
$ df -h /dev/md0
Filesystem Size Used Avail Use% Mounted on
/dev/md0 40G 48M 38G 1% /mnt/test
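Finally, if the array is listed in mdadm.conf, it's worth regenerating that entry so it assembles correctly on boot; the path below assumes CentOS 7 (on Ubuntu it is /etc/mdadm/mdadm.conf, followed by update-initramfs -u). Remove any stale ARRAY line for /dev/md0 first:
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf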