pvresize doesn't seem to resize after increasing the size of the underlying block device

partprobe /dev/vda

man partprobe

NAME
       partprobe - inform the OS of partition table changes

SYNOPSIS
       partprobe [-d] [-s] [devices...]


I was running into this issue on a CentOS 7 guest. In my case I had increased the size of the ZFS zvol backing the VM, but the guest saw no change and pvresize would not pick it up. I ended up booting into SystemRescueCD 4.4.0 and used parted's resizepart command. CentOS 7 ships parted 3.1, where this command is not available; SystemRescueCD now carries parted 3.2, which worked.

After booting into the SystemRescueCD ISO, run parted /dev/<your device> and use the following as an example:

resizepart 2 37.6G

Where 2 is the partition number and 37.6G is the desired new, larger size.

After that, still booted from the rescue ISO, I ran pvresize and it worked correctly. Reboot into the VM (or your system) and everything looked good from there. :) Hope that helps!
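Put together, the procedure from the rescue ISO looks roughly like this. This is only a sketch: /dev/vda, partition number 2, and the size are example values, and the commands are echoed rather than executed here because they modify the partition table.

```shell
#!/bin/sh
# Sketch of the rescue-ISO procedure above. DEVICE, PART, and NEWSIZE
# are example values; the commands are echoed (dry run) since they are
# destructive. Drop the `echo` to actually run them.
DEVICE=/dev/vda
PART=2
NEWSIZE=37.6G

# Grow the partition (resizepart needs parted >= 3.2):
echo parted "$DEVICE" resizepart "$PART" "$NEWSIZE"

# Then grow the LVM physical volume on that partition:
echo pvresize "${DEVICE}${PART}"
```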


In my case, I had a volume of 60GB, and I extended it to 110GB.

After resizing the disk from the AWS console, running lsblk shows the new size of the disk, as expected, but the partition is still 60G:

nvme3n1                         259:4    0  110G  0 disk
└─nvme3n1p1                     259:5    0   60G  0 part
  └─vg_user01-lv_user01         253:0    0   60G  0 lvm  /home/user01

In a normal case, the next step would be to expand the physical volume, /dev/nvme3n1p1, but pvresize did not pick up the extra space as expected:

sudo pvresize -v /dev/nvme3n1p1
    Archiving volume group "vg_user01" metadata (seqno 22).
    Resizing volume "/dev/nvme3n1p1" to 125827072 sectors.
    No change to size of physical volume /dev/nvme3n1p1.
    Updating physical volume "/dev/nvme3n1p1"
    Creating volume group backup "/etc/lvm/backup/vg_user01" (seqno 23).
  Physical volume "/dev/nvme3n1p1" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
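A quick way to see the mismatch is to compare the sizes the kernel reports for the whole disk and for the partition in sysfs (both are in 512-byte sectors). This is a sketch: the sysfs paths in the comments use this answer's device names, and the function takes the files as arguments so you can point it anywhere.

```shell
#!/bin/sh
# Compare whole-disk vs partition size as the kernel sees them.
# sysfs reports sizes in 512-byte sectors.
report_sizes() {
    disk_sectors=$(cat "$1")    # e.g. /sys/block/nvme3n1/size
    part_sectors=$(cat "$2")    # e.g. /sys/class/block/nvme3n1p1/size
    echo "disk:      $((disk_sectors * 512 / 1048576)) MiB"
    echo "partition: $((part_sectors * 512 / 1048576)) MiB"
}

# On the system from this example you would run:
#   report_sizes /sys/block/nvme3n1/size /sys/class/block/nvme3n1p1/size
```

If the disk shows ~110 GiB but the partition still shows ~60 GiB, pvresize cannot help yet: the PV ends where the partition ends.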

After investigating the issue, the cause turned out to be the partition table: the kernel sees the larger disk, but the partition nvme3n1p1 itself still has its old size, so pvresize has no extra space to claim. Some people advise rebooting the instance, but that did not work either.

To solve this issue, run growpart against the disk, passing the number of the partition to grow:

[root@ec2-basic user01]# growpart /dev/nvme3n1 1
CHANGED: partition=1 start=2048 old: size=125827072 end=125829120 new: size=230684639 end=230686687

Then, we can see the changes reflected:

[root@ec2-basic user01]# lsblk
NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme3n1                         259:4    0  110G  0 disk
└─nvme3n1p1                     259:5    0  110G  0 part
  └─vg_user01-lv_user01         253:0    0   60G  0 lvm  /home/user01

Then we check the logical volume path:

[root@ec2-basic user01]# sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_user01/lv_user01
....

Tell LVM to extend the logical volume to use all of the new partition size:

[root@ec2-basic user01]# lvextend -l +100%FREE /dev/vg_user01/lv_user01
  Size of logical volume vg_user01/lv_user01 changed from <60.00 GiB (15359 extents) to <110.00 GiB (28159 extents).
  Logical volume vg_user01/lv_user01 successfully resized.
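The extent counts explain the "<60.00 GiB" / "<110.00 GiB" notation. Assuming the default 4 MiB physical extent size (an assumption; verify with vgdisplay), the LV is a few MiB short of the round number:

```shell
#!/bin/sh
# Extents -> size, assuming a 4 MiB physical extent size
# (the LVM default; check `vgdisplay` on your VG to confirm).
echo "before: $((15359 * 4)) MiB"   # 61436 MiB, just under 60 GiB (61440 MiB)
echo "after:  $((28159 * 4)) MiB"   # 112636 MiB, just under 110 GiB (112640 MiB)
```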

Finally, we extend the filesystem:

[root@ec2-basic user01]# resize2fs  /dev/vg_user01/lv_user01
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/vg_user01/lv_user01 is mounted on /home/user01; on-line resizing required
old_desc_blocks = 8, new_desc_blocks = 14
The filesystem on /dev/vg_user01/lv_user01 is now 28834816 blocks long.
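Note that resize2fs only handles ext2/ext3/ext4. If the LV held XFS (the default on newer RHEL/CentOS releases), the equivalent step would be xfs_growfs on the mount point instead. A small dispatch sketch (the grow_cmd helper is hypothetical, and it only prints the command to run):

```shell
#!/bin/sh
# Print the right grow command for the filesystem type on the LV.
# grow_cmd is a hypothetical helper for illustration; the paths are
# the example values from this answer.
grow_cmd() {
    fstype=$1; target=$2
    case "$fstype" in
        ext2|ext3|ext4) echo "resize2fs $target" ;;
        xfs)            echo "xfs_growfs $target" ;;  # takes the mount point
        *)              echo "unsupported: $fstype" >&2; return 1 ;;
    esac
}

grow_cmd ext4 /dev/vg_user01/lv_user01   # prints: resize2fs /dev/vg_user01/lv_user01
grow_cmd xfs  /home/user01               # prints: xfs_growfs /home/user01
```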

[root@aws-test user01]# lsblk
NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme3n1                         259:4    0  110G  0 disk
└─nvme3n1p1                     259:5    0  110G  0 part
  └─vg_user01-lv_user01         253:0    0  110G  0 lvm  /home/user01