Install Ubuntu 18.04 desktop with RAID 1 and LVM on a machine with UEFI BIOS
Solution 1:
With some help from How to install Ubuntu server with UEFI and RAID1 + LVM, RAID set up in Ubuntu 18.04, RAID support in Ubuntu 18.04 Desktop installer?, and How to get rid of the "scanning for btrfs file systems" at start-up?, I managed to put together a working HOWTO using Linux commands only.
In short
- Download the alternate server installer.
- Install with manual partitioning: EFI + RAID, and LVM on the RAID partition.
- Clone the EFI partition from the installed drive to the other drive.
- Install the second EFI partition into the UEFI boot chain.
- To avoid a lengthy wait during boot in case a drive breaks, remove the btrfs boot scripts.
In detail
1. Download the installer
- Download the alternate server installer from http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/
- Create a bootable CD or USB and boot the new machine from it (a sketch for writing the USB follows this list).
- Select `Install Ubuntu Server`.
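A minimal sketch for writing the image to a USB stick from another Linux machine; the ISO filename depends on the current point release, and `/dev/sdX` is a placeholder for your USB device:

```bash
# Identify the USB device first; dd will destroy everything on it.
lsblk
# /dev/sdX and the ISO filename are placeholders -- adjust to your system.
sudo dd if=ubuntu-18.04-server-amd64.iso of=/dev/sdX bs=4M status=progress
sync
```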
2. Install with manual partitioning
- During install, at the `Partition disks` step, select `Manual`. (A command-line sketch of the whole partitioning scheme follows this list.)
- If the disks contain any partitions, remove them:
  - If any logical volumes are present on your drives, select `Configure the Logical Volume Manager`.
    - Choose `Delete logical volume` until all volumes have been deleted.
    - Choose `Delete volume group` until all volume groups have been deleted.
  - If any RAID device is present, select `Configure software RAID`.
    - Choose `Delete MD device` until all MD devices have been deleted.
  - Delete every partition on the physical drives by choosing them and selecting `Delete the partition`.
- Create physical partitions:
  - On each drive, create a 512MB partition (I've seen others use 128MB) at the beginning of the disk; Use as: `EFI System Partition`.
  - On each drive, create a second partition with 'max' size; Use as: `Physical Volume for RAID`.
- Set up RAID:
  - Select `Configure software RAID`.
  - Select `Create MD device`, type `RAID1`, 2 active disks, 0 spare disks, and select the `/dev/sda2` and `/dev/sdb2` devices.
- Set up LVM:
  - Select `Configure the Logical Volume Manager`.
  - Create volume group `vg` on the `/dev/md0` device.
  - Create logical volumes, e.g.
    - `swap` at 16G
    - `root` at 35G
    - `tmp` at 10G
    - `var` at 5G
    - `home` at 200G
- Set up how to use the logical partitions:
  - For the `swap` partition, select `Use as: swap`.
  - For the other partitions, select `Use as: ext4` with the proper mount points (`/`, `/tmp`, `/var`, `/home`, respectively).
- Select `Finish partitioning and write changes to disk`.
- Allow the installation program to finish and reboot.
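For reference, the installer steps above correspond roughly to the commands below. This is a sketch only (device names, volume sizes, and the volume group name are taken from the steps above); the installer performs all of this interactively, so you do not need to run it.

```bash
# Rough command-line equivalent of the manual partitioning above.
for d in /dev/sda /dev/sdb; do
  sudo sgdisk --zap-all "$d"                # wipe old partition tables
  sudo sgdisk -n 1:0:+512M -t 1:EF00 "$d"   # EFI System Partition
  sudo sgdisk -n 2:0:0     -t 2:FD00 "$d"   # Linux RAID, rest of the disk
done
# Mirror the two RAID partitions.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Put LVM on top of the mirror and carve out the logical volumes.
sudo pvcreate /dev/md0
sudo vgcreate vg /dev/md0
sudo lvcreate -L 16G  -n swap vg
sudo lvcreate -L 35G  -n root vg
sudo lvcreate -L 10G  -n tmp  vg
sudo lvcreate -L 5G   -n var  vg
sudo lvcreate -L 200G -n home vg
```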
If you are re-installing on a drive that earlier had a RAID configuration, the RAID creation step above might fail and you never get an `md` device. In that case, you may have to create an Ubuntu Live USB stick, boot into it, and run `gparted` to clear all your partition tables before you restart this HOWTO.
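Alternatively, the stale RAID metadata can be cleared from the live session on the command line. A sketch, assuming the old array shows up as `/dev/md0` (auto-assembled arrays often appear as `/dev/md127`; check `/proc/mdstat`):

```bash
# From the live session: stop the old array, then wipe RAID superblocks
# and partition-table signatures. This destroys all data on the disks!
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sda2 /dev/sdb2
sudo wipefs --all /dev/sda /dev/sdb
```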
3. Inspect system
- Check which EFI partition has been mounted (most likely `/dev/sda1`): `mount | grep boot`
- Check the RAID status (most likely it is still synchronizing): `cat /proc/mdstat`
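For a more detailed view of the array state and the resync progress, mdadm can report on the device directly:

```bash
# Shows array state, sync progress, and the member devices.
sudo mdadm --detail /dev/md0
```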
4. Clone EFI partition
The EFI bootloader should have been installed on `/dev/sda1`. As that partition is not mirrored via the RAID system, we need to clone it:

```
sudo dd if=/dev/sda1 of=/dev/sdb1
```
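To check that the clone succeeded, the two partitions can be compared byte for byte. Do this right after the `dd`, before anything writes to the mounted copy:

```bash
# No output means the two EFI partitions are identical.
sudo cmp /dev/sda1 /dev/sdb1
```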
5. Insert second drive into boot chain
This step may not be necessary: if either drive dies, the system should still boot from the (identical) EFI partition on the other. However, it seems prudent to ensure that we can boot from either disk.

- Run `efibootmgr -v` and note the file name for the `ubuntu` boot entry. On my install it was `\EFI\ubuntu\shimx64.efi`.
- Run `sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l \EFI\ubuntu\shimx64.efi`. Depending on your shell, you might have to escape the backslashes (see the example below).
- Verify with `efibootmgr -v` that you have the same file name for the `ubuntu` and `ubuntu2` boot entries and that they are the first two in the boot order.
- Now the system should boot even if either of the drives fails!
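A worked example with the quoting made explicit for bash; the loader path is the one reported above and may differ on your system:

```bash
# Single quotes keep the backslashes intact in bash.
sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'
# Verify both entries and the boot order.
efibootmgr -v
```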
6. Wait

If you want to physically remove or disable a drive to test your installation, you must first wait until the RAID synchronization has finished! Monitor the progress with `cat /proc/mdstat`.

However, you may perform step 7 below while waiting.
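To avoid re-running the command by hand, the sync can be watched until it completes:

```bash
# Refresh the RAID status every 10 seconds; Ctrl-C to exit.
watch -n 10 cat /proc/mdstat
```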
7. Remove btrfs

If one drive fails (after the synchronization is complete), the system will still boot. However, the boot sequence will spend a lot of time looking for btrfs file systems. To remove that unnecessary wait, run

```
sudo apt-get purge btrfs-progs
```

This should remove `btrfs-progs`, `btrfs-tools` and `ubuntu-server`. The last package is just a meta package, so if no other packages are listed for removal, you should be OK.
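If you want to see the removal list before committing, apt can simulate the purge first:

```bash
# Dry run: lists what would be removed without removing anything.
apt-get -s purge btrfs-progs
```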
8. Install the desktop version

Run `sudo apt install ubuntu-desktop` to install the desktop version. After that, the synchronization is probably done, and your system is configured and should survive a disk failure!
9. Update EFI partition after grub-efi-amd64 update

When the package `grub-efi-amd64` is updated, the files on the EFI partition (mounted at `/boot/efi`) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that `grub-efi-amd64` is about to be updated, so you don't have to check after every update.
9.1 Find out clone source, quick way

If you haven't rebooted after the update, use

```
mount | grep boot
```

to find out which EFI partition is mounted. That partition, typically `/dev/sdb1`, should be used as the clone source.
9.2 Find out clone source, paranoid way

Create mount points and mount both partitions:

```
sudo mkdir /tmp/sda1 /tmp/sdb1
sudo mount /dev/sda1 /tmp/sda1
sudo mount /dev/sdb1 /tmp/sdb1
```

Find the timestamp of the newest file in each tree:

```
sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1
```

Compare the timestamps:

```
cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'
```

This should print `/dev/sdb1 is newest.` (most likely) or `/dev/sda1 is newest.`. That partition should be used as the clone source.

Unmount the partitions before cloning to avoid cache/partition inconsistency:

```
sudo umount /tmp/sda1 /tmp/sdb1
```
9.3 Clone

If `/dev/sdb1` was the clone source:

```
sudo dd if=/dev/sdb1 of=/dev/sda1
```

If `/dev/sda1` was the clone source:

```
sudo dd if=/dev/sda1 of=/dev/sdb1
```

Done!
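For convenience, the quick check (9.1) and the clone (9.3) can be combined in a small script. A sketch, assuming the two EFI partitions are `/dev/sda1` and `/dev/sdb1` and that the freshly updated copy is still mounted at `/boot/efi`:

```bash
#!/bin/bash
# Clone the currently mounted EFI partition onto its mirror.
set -e
src=$(findmnt -n -o SOURCE /boot/efi)   # e.g. /dev/sda1
case "$src" in
  /dev/sda1) dst=/dev/sdb1 ;;
  /dev/sdb1) dst=/dev/sda1 ;;
  *) echo "Unexpected EFI source: $src" >&2; exit 1 ;;
esac
echo "Cloning $src to $dst"
sudo dd if="$src" of="$dst"
```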
10. Virtual machine gotchas

If you want to try this out in a virtual machine first, there are some caveats: apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown/restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from `/dev/sda1` (use `FS1:` for `/dev/sdb1`):

```
FS0:
\EFI\ubuntu\grubx64.efi
```

The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.
Solution 2:
RAID-1 + XFS + UEFI
I was able to get about 99% of the way there with @Niclas Börlin's answer, thank you!
I also drew help from the following answers:
- Ubuntu 17.04 will not boot on UEFI system with XFS system partition
- How to install Ubuntu server with UEFI and RAID1 + LVM
Here are the ways I messed things up:

- Having the BIOS in "Auto" mode, which allowed the USB key to boot NOT in UEFI mode. This caused GRUB not to be installed correctly. I switched the mode to UEFI-only, rebooted, deleted all the logical volumes, RAID groups, and partitions, and started over. I also tried to re-install GRUB on the EFI partitions, which only made things worse.
- Having the `/boot` partition on XFS. The grub2 that comes with Ubuntu 18.04 LTS apparently does not handle this, although that is not documented anywhere. I created a separate ext4 `/boot` partition. Note that this is still on the RAID-1 LVM volume, not a separate partition like the EFI ones! Lots of older answers say this isn't possible, but it seems to be now. I ended up getting GRUB but with "unknown filesystem" errors (e.g. How to fix "error: unknown filesystem. grub rescue>") that gave me the clue that XFS on `/boot` was a no-go.
- Somewhere in the middle of that, I ended up with GRUB installed but only a blank grub prompt, no GRUB menu (e.g. https://help.ubuntu.com/community/Grub2/Troubleshooting#Specific_Troubleshooting). This was due to `/boot` not being accessible.
What worked for me
Start with @Niclas Börlin's answer and change a few minor things.
Partition Table
I favor one large `/` partition, so this layout reflects that choice. The main change is an ext4 `/boot` partition instead of an XFS one.
```
sda/
  GPT 1M (auto-added)
  sda1 - EFI - 512M
  sda2 - MD0 - 3.5G
sdb/
  GPT 1M (auto-added)
  sdb1 - EFI - 512M
  sdb2 - MD0 - 3.5G
md0/
  vg/
    boot - 1G   - EXT4 /boot
    swap - 16G  - SWAP
    root - rest - XFS  /
```
After the completed install, I was able to `dd` the contents of `sda1` to `sdb1` as detailed in the other answer. I was also able to add the second drive to the boot chain using `efibootmgr`, as detailed there.