Logical volumes are inactive at boot time
I resized my logical volume and filesystem and all went smoothly. I then installed a new kernel, and after a reboot I can't boot either the current or the former one. I get a "volume group not found" error after selecting the GRUB(2) entry. Inspection from the BusyBox shell reveals that the volumes are not registered with the device mapper and that they are inactive. Even after activating them I wasn't able to mount them; I got a "file not found" error (mount /dev/mapper/all-root /mnt).
Any ideas how to proceed, or how to make them active at boot time? And why are the volumes all of a sudden inactive at boot time?
EDIT: Further investigation revealed that this had nothing to do with the resizing of the logical volumes. The fact that the logical volumes had to be activated manually in the ash shell after the failed boot, and a possible solution to this problem, are covered in my answer below.
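For reference, manual activation from the initramfs (ash/BusyBox) prompt looks roughly like this; a sketch, where /dev/mapper/all-root is the root LV from my setup and should be adjusted to yours:
# scan for volume groups, then activate every logical volume in them
lvm vgscan
lvm vgchange -ay
# then mount the root LV and continue booting
mount /dev/mapper/all-root /mnt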
Solution 1:
So I managed to solve this eventually. There is a problem (bug) with detecting logical volumes, which is some sort of race condition (in my case possibly related to the fact that this happens inside KVM). This is covered in the Debian bug report linked below. In my particular case (Debian Squeeze) the solution is as follows:
- back up the script /usr/share/initramfs-tools/scripts/local-top/lvm2
- apply the patch from the bug report mentioned below
- run update-initramfs -u
This helped me; I hope it will help others (strangely, this is not part of mainline yet).
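In concrete terms, the steps above amount to something like this (a sketch; the patch filename matches the attachment in the link below):
# back up the original script
cp /usr/share/initramfs-tools/scripts/local-top/lvm2 /root/lvm2.orig
# apply the patch from the bug report
patch /usr/share/initramfs-tools/scripts/local-top/lvm2 < lvm2_wait-lvm.patch
# rebuild the initramfs so the patched script is actually used
update-initramfs -u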
Link to patch: http://bugs.debian.org/cgi-bin/bugreport.cgi?msg=10;filename=lvm2_wait-lvm.patch;att=1;bug=568838
Below is a copy for posterity.
--- /usr/share/initramfs-tools/scripts/local-top/lvm2 2009-08-17 19:28:09.000000000 +0200
+++ /usr/share/initramfs-tools/scripts/local-top/lvm2 2010-02-19 23:22:14.000000000 +0100
@@ -45,12 +45,30 @@
eval $(dmsetup splitname --nameprefixes --noheadings --rows "$dev")
- if [ "$DM_VG_NAME" ] && [ "$DM_LV_NAME" ]; then
- lvm lvchange -aly --ignorelockingfailure "$DM_VG_NAME/$DM_LV_NAME"
- rc=$?
- if [ $rc = 5 ]; then
- echo "Unable to find LVM volume $DM_VG_NAME/$DM_LV_NAME"
- fi
+ # Make sure that we have non-empty volume group and logical volume
+ if [ -z "$DM_VG_NAME" ] || [ -z "$DM_LV_NAME" ]; then
+ return 1
+ fi
+
+ # If the logical volume hasn't shown up yet, give it a little while
+ # to deal with LVM on removable devices (inspired from scripts/local)
+ fulldev="/dev/$DM_VG_NAME/$DM_LV_NAME"
+ if [ -z "`lvm lvscan -a --ignorelockingfailure |grep $fulldev`" ]; then
+ # Use default root delay
+ slumber=$(( ${ROOTDELAY:-180} * 10 ))
+
+ while [ -z "`lvm lvscan -a --ignorelockingfailure |grep $fulldev`" ]; do
+ /bin/sleep 0.1
+ slumber=$(( ${slumber} - 1 ))
+ [ ${slumber} -gt 0 ] || break
+ done
+ fi
+
+ # Activate logical volume
+ lvm lvchange -aly --ignorelockingfailure "$DM_VG_NAME/$DM_LV_NAME"
+ rc=$?
+ if [ $rc = 5 ]; then
+ echo "Unable to find LVM volume $DM_VG_NAME/$DM_LV_NAME"
fi
}
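A note on the timing in the patch: slumber is initialised to ROOTDELAY * 10 (1800 with the default ROOTDELAY of 180) and decremented once per 0.1-second sleep, so the loop waits at most ROOTDELAY seconds (three minutes by default) for the logical volume to show up before giving up and attempting the activation anyway.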
Solution 2:
Create a startup script in /etc/init.d/lvm
containing the following:
#!/bin/sh
# Activate LVM volume groups at boot, deactivate them at shutdown.
case "$1" in
  start)
    # scan for volume groups, then activate all logical volumes in them
    /sbin/vgscan
    /sbin/vgchange -ay
    ;;
  stop)
    # deactivate all logical volumes
    /sbin/vgchange -an
    ;;
  restart|force-reload)
    ;;
esac
exit 0
Then execute the commands:
chmod 0755 /etc/init.d/lvm
update-rc.d lvm start 26 S . stop 82 1 .
This should do the trick on Debian systems.
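To verify after the next reboot (assuming the standard lvm2 userland tools are installed), each logical volume should now be listed as ACTIVE rather than inactive:
sudo lvscan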
Solution 3:
If vgscan "finds" the volumes, you should be able to activate them with vgchange -ay /dev/volumegroupname:
$ sudo vgscan
[sudo] password for username:
Reading all physical volumes. This may take a while...
Found volume group "vg02" using metadata type lvm2
Found volume group "vg00" using metadata type lvm2
$ sudo vgchange -ay /dev/vg02
  7 logical volume(s) in volume group "vg02" now active
I am not sure what would cause them to go inactive after a reboot though.