How to install Ubuntu server with UEFI and RAID1 + LVM

Solution 1:

Ok, I found the solution and can answer my own questions.

1) Can I use LVM over RAID1 on a UEFI machine?

Yes, definitely. And it will be able to boot even if one of the two disks fails.

2) How do I do this?

There seems to be a bug in the installer, so using the installer alone results in a failure to boot (you end up in a GRUB shell).

Here is a working procedure:

1) Manually create the following partitions on each of the two disks:

  • a 512MB partition of type UEFI at the beginning of the disk
  • a partition of type RAID after it
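
If you prefer doing this from a shell instead of the installer's partitioner, here is a rough sketch with sgdisk (the disk names /dev/sda and /dev/sdb are assumptions, adjust to your machine):

# WARNING: this wipes both disks; sda/sdb are assumed names
sgdisk --zap-all /dev/sda
sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sda   # 512MB EFI System Partition
sgdisk -n 2:0:0     -t 2:fd00 -c 2:"Linux RAID" /dev/sda   # rest of the disk for RAID
sgdisk --zap-all /dev/sdb
sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sdb
sgdisk -n 2:0:0     -t 2:fd00 -c 2:"Linux RAID" /dev/sdb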

2) Create your RAID1 array with the two RAID partitions, then create your LVM volume group on that array, and your logical volumes (I created one for root, one for home and one for swap).
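
The installer does all of this through its menus, but for reference this is roughly what it amounts to on the command line (the array, volume group and volume names/sizes below are just examples):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # RAID1 over the two RAID partitions
pvcreate /dev/md0                                                        # turn the array into an LVM physical volume
vgcreate vg0 /dev/md0                                                    # volume group on top of it
lvcreate -L 20G -n root vg0
lvcreate -L 4G  -n swap vg0
lvcreate -l 100%FREE -n home vg0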

3) Let the installation continue, then reboot. FAILURE! You should get a GRUB shell.

4) It might be possible to boot from the GRUB shell, but I chose to boot from a rescue USB disk. In rescue mode, I opened a shell on my target root fs (the one on the root LVM logical volume).

5) Get the UUID of this target root partition with 'blkid'. Note it down or take a picture with your phone; you'll need it in the next step.
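
A minimal sketch, assuming the volume group and logical volume are called vg0 and root as in the example above (use whatever names the installer actually created):

blkid /dev/mapper/vg0-root    # the UUID="..." value is the one you need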

6) Mount the EFI system partition ('mount /boot/efi') and edit the grub.cfg file:

vi /boot/efi/EFI/ubuntu/grub.cfg

Here, replace the erroneous UUID with the one you got in step 5, then save.
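
On the systems I have seen, that grub.cfg stub is only a few lines; the one to fix is the search.fs_uuid line (the UUID below is a placeholder, and the exact contents may differ slightly):

search.fs_uuid xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx root
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg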

7) To be able to boot from the second disk, copy the EFI partition to that second disk:

dd if=/dev/sda1 of=/dev/sdb1

(Replace sda and sdb with whatever suits your configuration.)

8) Reboot. In your UEFI settings screen, set the two EFI partitions as bootable, and set a boot order.
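
If your firmware does not list the second EFI partition by itself, efibootmgr can register it from Linux; the loader path and disk/partition numbers below are assumptions, so check what is actually on your ESP:

efibootmgr -v                                                             # show current entries and boot order
efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu (disk2)" -l '\EFI\ubuntu\grubx64.efi'
efibootmgr -o 0000,0001                                                   # entry numbers taken from the -v output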

You're done. You can test it: unplug one disk or the other, and it should still boot!
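
After plugging the disk back in, you can check (and if needed re-add) the degraded array; a sketch, assuming the array is /dev/md0 and the returning partition is /dev/sdb2:

cat /proc/mdstat                       # overall RAID health
mdadm --detail /dev/md0                # per-array status
mdadm /dev/md0 --add /dev/sdb2         # re-add the partition if it was dropped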

Solution 2:

I did this a little over a year ago myself, and while I did have problems, they weren't the problems listed here. I'm not sure where I found the advice I followed at the time, so I'll post what I did here.

1) Create 128MB EFI partitions at the start of each disk (only one of which will actually mount, at /boot/efi)

2) Create a 1GB /boot RAID1 array, no LVM

3) Create a large RAID1 array using LVM

Having /boot on a separate partition/RAID1 array solves the problem of the EFI partition being unable to find what it needs.

And for those looking for more detail, as I was at the time, here is more precisely how I set things up:

6x 3TB Drives

Have 4 RAID arrays:
/dev/md0 = 1GB RAID1 across 3 drives
   --> /boot (no LVM)
/dev/md1 = 500GB RAID1 across 3 drives
   LVM:
      --> /     =  40GB
      --> /var  = 100GB
      --> /home = 335GB
      --> /tmp  =  10GB

/dev/md2 = 500GB RAID1 across 3 drives (for VM's/linux containers)
   LVM:
      --> /lxc/container1 =  50GB
      --> /lxc/container2 =  50GB
      --> /lxc/container3 =  50GB
      --> /lxc/container4 =  50GB
      --> /lxc/extra      = 300GB (for more LXC's later)

/dev/md3 = 10TB RAID6 across 6 drives (for media and such)
   --> /mnt/raid6 (no LVM)


Disks are set up thus:

/sda => /boot/efi (128 MB) | /dev/md0 (1 GB) | /dev/md1 (500GB) | /dev/md3 (2.5TB)
/sdb => /boot/efi (128 MB) | /dev/md0 (1 GB) | /dev/md1 (500GB) | /dev/md3 (2.5TB)
/sdc => /boot/efi (128 MB) | /dev/md0 (1 GB) | /dev/md1 (500GB) | /dev/md3 (2.5TB)
/sdd => ----- left empty for simplicity ---- | /dev/md2 (500GB) | /dev/md3 (2.5TB)
/sde => ----- left empty for simplicity ---- | /dev/md2 (500GB) | /dev/md3 (2.5TB)
/sdf => ----- left empty for simplicity ---- | /dev/md2 (500GB) | /dev/md3 (2.5TB)
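
For reference, building those arrays by hand would look roughly like this (the partition numbers are assumptions based on the layout above):

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2   # 1GB /boot mirror
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3   # 500GB mirror for LVM
mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1   # 500GB mirror for LXC
mdadm --create /dev/md3 --level=6 --raid-devices=6 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd2 /dev/sde2 /dev/sdf2   # RAID6 for media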

Note that only one of the /boot/efi partitions will actually mount; the other two are clones. I did this because I wanted the machine to still boot after losing any one of the 3 disks in the RAID1. I don't mind running in degraded mode as long as I still have full redundancy, and that gives me time to replace the drive while the machine stays up.
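
Cloning the ESP onto the other two disks is just a straight copy; assuming sda1 is the one that actually mounts at /boot/efi:

dd if=/dev/sda1 of=/dev/sdb1 bs=1M
dd if=/dev/sda1 of=/dev/sdc1 bs=1M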

Also, if I did not have the second RAID1 array for the LXC containers and basically all the databases and such, /var would have had to be MUCH bigger. Having each LXC as its own logical volume was, however, a nice way to prevent one VM/website from disrupting the others due to out-of-control error logs, for example...
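
The per-container logical volumes are just plain LVs on that second array; a sketch, assuming a volume group named vg_lxc on /dev/md2 (names are examples):

pvcreate /dev/md2
vgcreate vg_lxc /dev/md2
lvcreate -L 50G -n container1 vg_lxc
mkfs.ext4 /dev/vg_lxc/container1
mkdir -p /lxc/container1 && mount /dev/vg_lxc/container1 /lxc/container1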

And a final note: I installed from the Ubuntu Alternate Install USB with 12.04.01 (before 12.04.02 came out), and everything worked quite nicely, after banging my head against it for 72 hours.

Hope that helps somebody!

Solution 3:

I had the same problem: EFI boot with two disks and software RAID.

/dev/sda

  • /dev/sda1 - 200MB efi partition
  • /dev/sda2 - 20G physical for raid
  • /dev/sda3 - 980G physical for raid

/dev/sdb

  • /dev/sdb1 - 200MB efi partition
  • /dev/sdb2 - 20G physical for raid
  • /dev/sdb3 - 980G physical for raid

Swap on /dev/md0 (sda2 & sdb2)
Root on /dev/md1 (sda3 & sdb3)
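
Creating and formatting those two arrays by hand would look roughly like this (a sketch; the installer can do the same thing from its menus):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # 20G mirror for swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # 980G mirror for root
mkswap /dev/md0 && swapon /dev/md0
mkfs.ext4 /dev/md1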

If you enter the grub-rescue shell, boot using:

set root=(md/1)
linux /boot/vmlinuz-3.8.0-29-generic root=/dev/md1
initrd /boot/initrd.img-3.8.0-29-generic
boot

After that, download this patch file: https://launchpadlibrarian.net/151342031/grub-install.diff (as explained in https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1229738), then apply it and reinstall GRUB on both EFI partitions:

wget https://launchpadlibrarian.net/151342031/grub-install.diff
cp /usr/sbin/grub-install /usr/sbin/grub-install.backup   # keep a backup of the original
patch /usr/sbin/grub-install grub-install.diff
mount /dev/sda1 /boot/efi
grub-install /dev/sda1
umount /dev/sda1
mount /dev/sdb1 /boot/efi                                 # repeat for the second disk's EFI partition
grub-install /dev/sdb1
reboot