Restoring an Amazon EBS RAID0 array from snapshots taken with ec2-consistent-snapshot
Since you're striping data across the volumes, it would stand to reason that you have to put each NEW volume in the same location in the RAID as the volume from which the snapshot was created.
I tested your premise, and logical as it may seem, the observed behavior says otherwise.
Let me detail this:
I have the exact same requirement as you do. However, the RAID0 that I am using has only 2 volumes.
I'm using Ubuntu 10 and have 2 EBS devices forming a RAID0 device formatted with XFS.
The RAID0 device was created using the following command:
sudo mdadm --create /dev/md0 --level 0 --metadata=1.1 --raid-devices 2 /dev/sdg /dev/sdh
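For context, the remaining setup was roughly the following. This is a minimal sketch; the mount point /data is my assumption, since the original setup doesn't name one:
sudo mkfs.xfs /dev/md0        # format the array with XFS, as mentioned above
sudo mkdir -p /data           # hypothetical mount point
sudo mount /dev/md0 /data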
I've installed MySQL and a bunch of other software, all configured to use /dev/md0 to store their data files.
Using the same volumes:
Once done, I unmount everything, stop the RAID, and reassemble it like so:
sudo mdadm --assemble /dev/md0 /dev/sdh /dev/sdg
The thing is that, irrespective of the order of /dev/sdg and /dev/sdh, the RAID reconstitutes itself correctly.
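Spelled out, the full stop-and-reassemble sequence looks something like this (a sketch; /data is again my hypothetical mount point):
sudo umount /data                  # unmount the XFS filesystem first
sudo mdadm --stop /dev/md0         # stop the array and release the member devices
sudo mdadm --assemble /dev/md0 /dev/sdh /dev/sdg   # member order deliberately swapped; still works
sudo mount /dev/md0 /data          # remount and verify the data is intact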
Using snapshots:
After this, I use ec2-consistent-snapshot
to create snapshots of the 2 EBS disks together. I then create volumes from these snapshots, attach them to a new instance (one that has already been configured for the software), reassemble the RAID (I've tried interchanging the order of the EBS volumes too), mount it, and I'm ready to go.
Sounds strange, but it works.
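To make the workflow concrete, here is a hedged sketch of both sides. The volume IDs vol-aaaa1111 and vol-bbbb2222 are hypothetical placeholders and /data is my assumed mount point; the flags mirror the longer ec2-consistent-snapshot command later on this page:
# On the running instance: snapshot both members of the array together
sudo ec2-consistent-snapshot --region us-east-1 --freeze-filesystem /data vol-aaaa1111 vol-bbbb2222

# On the new instance, after creating volumes from the snapshots and attaching them:
sudo mdadm --assemble /dev/md0 /dev/sdg /dev/sdh   # member order doesn't matter
sudo mount /dev/md0 /data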
I run a similar configuration (RAID0 over 4 EBS volumes), and consequently had the same concerns about reconstituting the RAID array from snapshots created with ec2-consistent-snapshot.
Fortunately, each device in a RAID array contains metadata (in a superblock) that records its position in the array, the UUID of the array, and the level of the array (e.g. RAID0). To query this superblock on any device, run the following command (the line matching '^this' describes the queried device):
$ sudo mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 2ca96b4a:9a1f1fbd:2f3c176d:b2b9da7c
  Creation Time : Mon Mar 28 23:31:41 2011
     Raid Level : raid0
  Used Dev Size : 0
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Mar 28 23:31:41 2011
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : ed10058a - correct
         Events : 1

     Chunk Size : 256K

      Number   Major   Minor   RaidDevice State
this     0     202       17        0      active sync   /dev/sdb1

   0     0     202       17        0      active sync   /dev/sdb1
   1     1     202       18        1      active sync   /dev/sdb2
   2     2     202       19        2      active sync   /dev/sdb3
   3     3     202       20        3      active sync   /dev/sdb4
If you run the same query on a device that is not part of an array, you obtain:
$ sudo mdadm --examine /dev/sda1
mdadm: No md superblock detected on /dev/sda1.
This proves that the command really relies on information stored on the device itself and not on some configuration file.
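A practical consequence: since the array UUID is recorded in every member's superblock, you can ask mdadm to assemble by UUID rather than naming devices in a particular order. A sketch, reusing the UUID from the output above:
sudo mdadm --assemble --scan --uuid=2ca96b4a:9a1f1fbd:2f3c176d:b2b9da7c   # scan devices for members of this array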
One can also examine the devices of a RAID array starting from the RAID device, retrieving similar information:
$ sudo mdadm --detail /dev/md0
I use the latter along with ec2-describe-volumes to build the list of volumes for ec2-consistent-snapshot (-n and --debug allow you to test this command without creating snapshots). The following command assumes that the directory /mysql is the mount point for the volume and that the AWS region is us-west-1:
$ sudo -E ec2-consistent-snapshot --region us-west-1 --mysql \
    --freeze-filesystem /mysql \
    --mysql-master-status-file /mysql/master-info \
    --description "$(date +'%Y/%m/%d %H:%M:%S') - ASR2 RAID0 (4 volumes) Snapshot" \
    --debug -n \
    $(ec2-describe-volumes --region us-west-1 \
        | grep $(wget http://169.254.169.254/latest/meta-data/instance-id -O - -q) \
        | egrep $(sudo mdadm --detail $(awk '{if($2=="/mysql") print $1}' /etc/fstab) \
            | awk '/ \/dev\//{printf "%s ", $7}' | sed -e 's# /#|/#g') \
        | awk '{printf "%s ", $2}')
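For readability, here is the same pipeline decomposed into intermediate steps (the variable names are mine, purely for illustration):
# 1. The ID of the current instance, from the EC2 metadata service:
INSTANCE_ID=$(wget http://169.254.169.254/latest/meta-data/instance-id -O - -q)
# 2. The md device mounted at /mysql, looked up in /etc/fstab (e.g. /dev/md0):
MD_DEVICE=$(awk '{if($2=="/mysql") print $1}' /etc/fstab)
# 3. The array's member devices, joined into an egrep pattern like "/dev/sdb1|/dev/sdb2|...":
MEMBERS=$(sudo mdadm --detail $MD_DEVICE | awk '/ \/dev\//{printf "%s ", $7}' | sed -e 's# /#|/#g')
# 4. The IDs of the EBS volumes attached to this instance as those devices:
VOLUMES=$(ec2-describe-volumes --region us-west-1 | grep $INSTANCE_ID | egrep "$MEMBERS" | awk '{printf "%s ", $2}')
echo $VOLUMES   # these are the volume IDs passed as the final arguments above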