Adding Disks With LVM
After piecing together a number of guides and tutorials from around the net, I was able to successfully add a disk to my Ubuntu Server 14.04 machine and set it up so that multiple hard drives appear as one single drive. To do this, I used LVM.
To help anyone else who might want to do this at some point, I will post here what I did.
These steps assume that you are essentially starting from scratch, except that you have already installed Ubuntu on your machine (via "Guided - use the entire disk and setup LVM") and physically added the additional disk. These steps may work if you have existing data on the machine, but I can't say for sure that it would be safe.
These commands assume the following information, and will vary depending on your setup:
- Your new disk is 'sdb'
- This can be found by running 'ls /dev/sd*'
- That your volume group name is 'ubuntu-vg'
- This can be found by running 'vgdisplay'
- That your logical volume path is '/dev/ubuntu-vg/root'
- This can be found by running 'lvdisplay'
- Your new disk is 20GB
- Hopefully you know how big the disk is.
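Before changing anything, it's worth double-checking those values against your own system. A quick sanity check (vgs and lvs are just shorter views of the same information as vgdisplay and lvdisplay above):

# overview of disks, partitions and mountpoints; the new, empty disk should show up here
lsblk
# short summaries of the volume group and logical volumes
sudo vgs
sudo lvs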
1. Install Logical Volume Manager (you may or may not need to do this).
sudo apt-get install system-config-lvm
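If you're not sure whether the tools are already there, note that the command-line tools used below (pvcreate, vgextend, lvextend) are provided by the lvm2 package, which a guided LVM install normally includes already. One quick way to check:

# check whether the LVM command-line tools are present; if not, install lvm2
which pvcreate vgextend lvextend || sudo apt-get install lvm2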
2. Convert your new disk to a physical volume (in this case, the new disk is 'sdb').
sudo pvcreate /dev/sdb
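If you want to confirm the physical volume was actually created before moving on, something like this should show it (still assuming the disk is /dev/sdb):

# the new PV should appear, not yet assigned to any volume group
sudo pvdisplay /dev/sdb
sudo pvs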
3. Add the physical volume to the volume group via 'vgextend'.
sudo vgextend ubuntu-vg /dev/sdb
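At this point the volume group has grown but nothing is using the new space yet. A quick way to confirm (assuming the 'ubuntu-vg' name from above) is to check that "Free PE / Size" now shows roughly the size of the new disk:

# "VG Size" should have grown and "Free  PE / Size" should show the new space
sudo vgdisplay ubuntu-vg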
4. Allocate the new space to the logical volume (extend the logical volume by the size of your new disk).
sudo lvextend -l +100%FREE /dev/ubuntu-vg/root
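If you'd rather not hand all of the free space to the root volume, lvextend also accepts an explicit size with -L instead of -l +100%FREE. Either way, lvdisplay should now report the larger "LV Size" (note the filesystem itself hasn't grown yet):

# alternative: grow by a fixed amount instead of using all free space
# sudo lvextend -L +20G /dev/ubuntu-vg/root
# confirm the logical volume is now larger
sudo lvdisplay /dev/ubuntu-vg/root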
5. Resize the file system on the logical volume so it uses the additional space.
sudo resize2fs /dev/ubuntu-vg/root
That should do it. Five simple steps! You don't even have to reboot. Just run df -h and the new disk space should show up as allocated, and any web apps you may be running will pick up the additional space.
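For reference, here is the whole sequence again as one block, exactly the commands above (still assuming the new disk is 'sdb' and the volume group is 'ubuntu-vg'; substitute your own names):

sudo pvcreate /dev/sdb                            # new disk becomes a physical volume
sudo vgextend ubuntu-vg /dev/sdb                  # add it to the existing volume group
sudo lvextend -l +100%FREE /dev/ubuntu-vg/root    # grow the root logical volume
sudo resize2fs /dev/ubuntu-vg/root                # grow the ext4 filesystem to match
df -h                                             # the extra space should now be visible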
I attempted to set up a large LVM disk in 14.04 64-bit Desktop with 3x500GB SATA drives. It failed during the installation with device errors. I found a link that states drives over 256GB hit the limit of the extents, but I don't know if that applies here.
I also attempted to set up RAID (RAID 1 for /boot at 300MB, RAID 0 for swap at 2GB, and RAID 5 for / with everything else). More failures.
$ sudo apt-get install -y mdadm
From the Live CD's "Try Ubuntu Without Installing" option you can still install mdadm. Still no luck. The GParted detection in Trusty seems to be slightly off and doesn't pick up some LVM volumes or some RAID /dev/mdX volumes unless everything has already been given a filesystem:
$ sudo mkfs.ext4 /dev/md2
Also, the RAID configs present even more challenges now. mdadm doesn't seem to be added to /target/usr/sbin by the installer any more, and getting it installed there so the system starts at all on reboot would be a huge ordeal, for which I simply don't have the time or patience, only to find out a few more hours of work later that it still doesn't boot on these new Windows 8 performance-hacked (UEFI) motherboards because of a GRUB issue.
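For what it's worth, the workaround is roughly to chroot into the installed system from the live session and install mdadm there before rebooting. A rough sketch of what that involves (assuming / ends up on /dev/md2 and /boot on /dev/md0, as in the layout below), and it's exactly the sort of ordeal I mean:

# from the live session, after the installer has finished copying files
sudo mount /dev/md2 /mnt
sudo mount /dev/md0 /mnt/boot
for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
sudo chroot /mnt apt-get install -y mdadm
sudo chroot /mnt update-initramfs -u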
Installing LVM from Ubiquity works great, until you need to add more disks to the / (root) partition, at which point you stand a very good chance of blowing away the entire install. LVM resize operations keep failing and you end up back at square one.
Trying the 14.04 Server installer: partman saves the day. It identified the architectures just fine, installed mdadm, put GRUB on all 3 disks, and everything works great.
The layout:
- 3 disks (500GB SATA)
- 3 partitions each, all set to the Linux RAID type in fdisk
- RAID 1 for /boot (300MB partitions), RAID 0 for swap (2GB partitions), and RAID 5 for / (500GB, whatever is left)
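For reference, this layout corresponds roughly to what you would get creating the arrays by hand with mdadm (just a sketch; the Server installer's partman actually did all of this, and the partition names match the fdisk output below):

# RAID 1 for /boot, RAID 0 for swap, RAID 5 for /
sudo mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
sudo mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
sudo mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3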
$ sudo fdisk -l
Device Boot Start End Blocks Id System
/dev/sda1 2048 616447 307200 83 Linux
/dev/sda2 616448 4810751 2097152 83 Linux
/dev/sda3 4810752 976773167 485981208 fd Linux raid autodetect

Device Boot Start End Blocks Id System
/dev/sdc1 * 2048 616447 307200 83 Linux
/dev/sdc2 616448 4810751 2097152 83 Linux
/dev/sdc3 4810752 976773167 485981208 fd Linux raid autodetect

Device Boot Start End Blocks Id System
/dev/sdb1 2048 616447 307200 83 Linux
/dev/sdb2 616448 4810751 2097152 83 Linux
/dev/sdb3 4810752 976773167 485981208 fd Linux raid autodetect
...
$ sudo ls /dev/md*
/dev/md0 /dev/md1 /dev/md2

/dev/md:
0 1 2

$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Aug 6 13:03:01 2014
Raid Level : raid1
Array Size : 306880 (299.74 MiB 314.25 MB)
Used Dev Size : 306880 (299.74 MiB 314.25 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Mon Aug 11 19:51:44 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Name : ubuntu:0
UUID : 03a4f230:82f50f13:13d52929:73139517
Events : 19

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
$ sudo mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed Aug 6 13:03:31 2014
Raid Level : raid0
Array Size : 6289920 (6.00 GiB 6.44 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Wed Aug 6 13:03:31 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Chunk Size : 512K

Name : ubuntu:1
UUID : 9843bdd3:7de01b63:73593716:aa2cb882
Events : 0

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
2 8 34 2 active sync /dev/sdc2
$ sudo mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Wed Aug 6 13:03:50 2014
Raid Level : raid5
Array Size : 971699200 (926.68 GiB 995.02 GB)
Used Dev Size : 485849600 (463.34 GiB 497.51 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Mon Aug 11 19:54:49 2014
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : ubuntu:2
UUID : 6ead2827:3ef088c5:a4f9d550:8cd86a1a
Events : 14815

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
3 8 35 2 active sync /dev/sdc3
$ sudo cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/md126 during installation
UUID=2af45208-3763-4cd2-b199-e925e316bab9 / ext4 errors=remount-ro 0 1
# /boot was on /dev/md125 during installation
UUID=954e752b-30e2-4725-821a-e143ceaa6ae5 /boot ext4 defaults 0 2
# swap was on /dev/md127 during installation
UUID=fb81179a-6d2d-450d-8d19-3cb3bde4d28a none swap sw 0 0
Running like a thoroughbred now.
It occurs to me that if you are using 32-bit hardware this won't work for you, but at this point I think soft RAID is probably a worse choice than single-disk LVM for anything smaller, and JBOD for anything older than this anyway.
Thanks.