VPS: How can I update the available hard disk space after upgrade?

Solution 1:

xfs_growfs -d /dev/vda1

Capital -D grows to a specified size in filesystem blocks, and it doesn't understand 'G'. As such, it assumed you wanted 53 filesystem blocks, which failed.

Lowercase -d grows to the maximum size.

If you want a specific size, you have to calculate it in filesystem blocks, e.g. from the fdisk output the maximum size is 26213376 blocks. So -D 26213376 will also grow it to the maximum from the 6553344 blocks you have now.
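For a concrete sketch (illustrative numbers, assuming the common 4096-byte XFS block size, which you can confirm with xfs_info on the mount point):

xfs_info / | grep bsize                     # confirm the filesystem block size
echo $((20 * 1024 * 1024 * 1024 / 4096))    # 20GiB / 4096 = 5242880 blocks
xfs_growfs -D 5242880 /dev/vda1             # grow to exactly 20GiB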

Solution 2:

Here is the step-by-step working solution I've just tested, upgrading from OVH VPS 2016 SSD 1 (10GB) to OVH VPS 2016 SSD 2 (20GB) and growing the partition to the maximum new size.

The environment is CentOS 7 with its default XFS filesystem.

After resizing, the partition is bootable with all the data in place.

Step 0. Upgrade to a higher VPS Plan

Perform the upgrade at OVH Dashboard.

You can't be in rescue mode while performing the upgrade.

Step 1. Boot into rescue mode

root@rescue-pro:~# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    254:0    0  10G  0 disk 
└─vda1 254:1    0  10G  0 part /
vdb    254:16   0  20G  0 disk 
└─vdb1 254:17   0  10G  0 part /mnt/vdb1

The above shows that vdb has 20GB after the upgrade, while the original partition vdb1 is still 10GB, mounted at /mnt/vdb1.

Step 2. Install tools to be used

root@rescue-pro:/# apt-get update

root@rescue-pro:/# apt-get install xfsprogs

root@rescue-pro:/# apt-get install parted

The rescue mode doesn't come with the tools we need, in particular xfs_growfs to grow the XFS filesystem.

We'll use parted to resize the underlying partition to the new size before we can grow the filesystem.
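If you prefer, the two installs can be combined into a single non-interactive command:

root@rescue-pro:/# apt-get update && apt-get install -y xfsprogs parted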

Step 3. Resizing the underlying partition

root@rescue-pro:~# umount /mnt/vdb1

We need to unmount the partition before we can apply changes.

root@rescue-pro:~# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    254:0    0  10G  0 disk 
└─vda1 254:1    0  10G  0 part /
vdb    254:16   0  20G  0 disk 
└─vdb1 254:17   0  10G  0 part 

Verify it's been unmounted.

root@rescue-pro:~# parted

At this point the rescue device vda is selected; we need to switch to the device we'll be working on.

(parted) select /dev/vdb
Using /dev/vdb

(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  10.7GB  10.7GB  primary  xfs          boot

(parted) unit s

Switch the display unit to sectors.

(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 41943040s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End        Size       Type     File system  Flags
 1      2048s  20971519s  20969472s  primary  xfs          boot

(parted) rm 1

The above will REMOVE the existing partition.

This is the part I was most hesitant to perform.

After lots of research and confirmation that removing the partition entry doesn't destroy the data on disk, I went ahead: as long as the new partition starts at the same sector, we get everything back.
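As an optional safety net (not part of the steps I tested above), you can dump the partition table with sfdisk before removing anything, so it can be restored if something goes wrong:

root@rescue-pro:~# sfdisk -d /dev/vdb > /tmp/vdb-parttable.backup

Restoring it would then be: sfdisk /dev/vdb < /tmp/vdb-parttable.backup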

(parted) mkpart
Partition type? primary
File system type? xfs
Start? 2048s
End? 100%

The above recreates the partition at the maximum size of the drive's space allocation.

Answer the prompts according to the print result above.

2048s is the start sector; this is why we switched the unit to sectors. Make sure it's exactly the same as in the print result above.

(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 41943040s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End        Size       Type     File system  Flags
 1      2048s  41943039s  41940992s  primary  xfs

Verify the new partition table.

Note that the boot flag is missing.

(parted) set 1 boot on

(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 41943040s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End        Size       Type     File system  Flags
 1      2048s  41943039s  41940992s  primary  xfs          boot

Set the boot flag and print the partition table out again to verify.

(parted) quit

Quit and apply all the changes.

You'll see the following note, which you can ignore, as the partition number stays the same after resizing.

Information: You may need to update /etc/fstab.
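For reference, the whole interactive session above can also be done non-interactively with parted's script mode (a sketch of the same steps; double-check the start sector against your own print output before running it):

root@rescue-pro:~# parted -s /dev/vdb unit s print
root@rescue-pro:~# parted -s /dev/vdb rm 1
root@rescue-pro:~# parted -s /dev/vdb mkpart primary xfs 2048s 100%
root@rescue-pro:~# parted -s /dev/vdb set 1 boot on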

Step 4. Verify Resized Partition

root@rescue-pro:~# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    254:0    0  10G  0 disk 
└─vda1 254:1    0  10G  0 part /
vdb    254:16   0  20G  0 disk 
└─vdb1 254:17   0  20G  0 part 

Now we can see that vdb1 is at the full size of 20GB.

Mount the partition back and check disk space.

root@rescue-pro:~# mount /dev/vdb1 /mnt/vdb1

root@rescue-pro:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  608M  8.8G   7% /
udev             10M     0   10M   0% /dev
tmpfs           388M  144K  388M   1% /run
/dev/vda1       9.9G  608M  8.8G   7% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           775M     0  775M   0% /run/shm
/dev/vdb1        10G  2.1G  8.0G  21% /mnt/vdb1

We can see that the mounted partition is back and all the data is in place, but the size is still 10GB.

Step 5. GROW THE XFS PARTITION

root@rescue-pro:~# xfs_growfs -d /mnt/vdb1
meta-data=/dev/vdb1              isize=256    agcount=6, agsize=524224 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=2621184, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621184 to 5242624

The above command grows /mnt/vdb1 to the maximum size available.

Note that you pass the mount point, not the block device.
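You can also double-check the result with xfs_info on the mount point; its data section should now report the same block count as the "data blocks changed from 2621184 to 5242624" line above:

root@rescue-pro:~# xfs_info /mnt/vdb1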

root@rescue-pro:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  608M  8.8G   7% /
udev             10M     0   10M   0% /dev
tmpfs           388M  144K  388M   1% /run
/dev/vda1       9.9G  608M  8.8G   7% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           775M     0  775M   0% /run/shm
/dev/vdb1        20G  2.1G   18G  11% /mnt/vdb1

Check the disk space again, and we can see that /mnt/vdb1 has successfully grown to 20GB.

Step 6. Final step - reboot and exit rescue mode

shutdown -h now

Go back to the OVH Dashboard and use Reboot VPS to exit the rescue mode.

After booting back into the normal VPS environment:

[root@vps]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G  2.1G   18G  11% /
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G   17M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs           386M     0  386M   0% /run/user/0

[root@vps]# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  20G  0 disk 
└─vda1 253:1    0  20G  0 part /

The above verifies that the root partition has been successfully resized to the full 20GB.

I found no complete documentation on how to perform this root partition resize after upgrading an OVH VPS.

The XFS filesystem is what makes it tricky: it can be grown but never shrunk, and it needs its own grow tool rather than resize2fs.

I hope these step-by-step notes help anyone facing the same issue.

Solution 3:

XFS filesystem resizing on OVH is really easy to do. You don't have to use rescue mode; just use the growpart and xfs_growfs commands.

See below for how I resized the root partition from 10GB to 20GB.

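Here is a minimal sketch of that approach (assuming a CentOS 7 VPS whose XFS root filesystem is the first partition on /dev/vda; on CentOS, growpart comes from the cloud-utils-growpart package, on Debian/Ubuntu from cloud-guest-utils):

[root@vps]# yum install -y cloud-utils-growpart
[root@vps]# growpart /dev/vda 1
[root@vps]# xfs_growfs /
[root@vps]# df -h /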