How to recover a logical volume deleted with lvremove

LVM backs up its metadata to /etc/lvm/backup and /etc/lvm/archive. The top of each file records the time/date when it was generated, so chances are you'll have a copy of the older metadata as it was before you deleted the LV. I believe the backup is automatic any time the metadata changes.
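
Since each archive file records which command it was taken before (in its description field), a quick way to find the right one is to grep for those lines. A minimal sketch, assuming the default archive location:

$ sudo sh -c 'grep -H description /etc/lvm/archive/*.vg'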

The following can be dangerous and destructive, so be very careful and, if possible, have a full backup.

The command to restore these volume group metadata backups is vgcfgrestore. First make a copy of the current working configuration using the vgcfgbackup command, with the -f flag to write the output to a separate file so that you don't alter any files in the /etc/lvm/backup or /etc/lvm/archive folders. Then diff the current configuration against the configuration you wish to restore, to verify that the only change you're about to apply is recreating the recently deleted LV. Having a full backup of your data probably isn't a bad idea either. You may also want to consider contacting your Linux vendor for support/guidance before proceeding if you're under a support contract, as I've never had to do this myself.
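
A minimal sketch of that backup-and-diff step (the VG name vg1 and the archive filename here are placeholders; substitute your own):

$ sudo vgcfgbackup -f /root/vg1-current.vg vg1
$ sudo diff -u /root/vg1-current.vg /etc/lvm/archive/vg1_00227-2048533959.vg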

Good luck.


" Could you please be more specific abut finding EFROM and ETO from the backup file? All the lv have a "start_extend" from 0 in my backup file so I'm a bit lost :) Thanks! – user186975 Aug 24 '13 at 17:06 "

OK, I will be very specific, with the simplest way to recover a logical volume.

Example:

1 - I have removed my logical volume!

$ sudo lvremove /dev/vg1/debian.root

2 - The first thing to do is look for the archive file at /etc/lvm/archive/vg1_(xxxxx).vg. I can find it just by looking at the date when I removed the logical volume!

$ sudo ls -l /etc/lvm/archive |more

3 - I found it!

-rw------- 1 root root 16255 Mar 20 14:29 vg1_00223-235991429.vg
-rw------- 1 root root 16665 Mar 20 16:49 vg1_00224-748876387.vg
-rw------- 1 root root 17074 Mar 20 16:49 vg1_00225-931666169.vg
-rw------- 1 root root 17482 Mar 20 16:50 vg1_00226-1238302012.vg
-rw------- 1 root root 18081 Mar 20 21:57 vg1_00227-2048533959.vg

That's the date when I did the lvremove! It was just a few minutes ago...

4 - Let's see the file!

$ sudo head /etc/lvm/archive/vg1_00227-2048533959.vg
# Generated by LVM2 version 2.02.95(2) (2012-03-06): Thu Mar 20 21:57:58 2014
contents = "Text Format Volume Group"
version = 1
description = "Created *before* executing 'lvremove /dev/vg1/debian.root'"
creation_host = "server"    # Linux server 3.8.0-35-generic #50-Ubuntu SMP Tue Dec 3 01:24:59 UTC 2013 x86_64
creation_time = 1395363478  # Thu Mar 20 21:57:58 2014

5 - Do a test run before recovering it!

$ sudo vgcfgrestore vg1 --test -f /etc/lvm/archive/vg1_00227-2048533959.vg
Test mode: Metadata will NOT be updated and volumes will not be (de)activated.
Restored volume group vg1

6 - OK, now repeat the command, without --test

$ sudo vgcfgrestore vg1 -f /etc/lvm/archive/vg1_00227-2048533959.vg
Restored volume group vg1

7 - Check it!

$ sudo lvscan |grep debian
ACTIVE            '/dev/vg1/debian.root' [7,81 GiB] inherit

8 - If the logical volume is not active, activate it!

$ sudo lvchange -a y /dev/vg1/debian.root 

That's all!

I hope this can help other people who are looking for this solution!


The easiest way to recover from an lvremove (assuming you didn't write to the extents the LV was residing in) is:

Just find the backup of your metadata in /etc/lvm/archive and figure out:

a) which extents the LV was residing in (EFROM, ETO)
b) which PVs your LV was residing on, and which extents on those PVs it was using (PFROM, PTO)

After you have this info, you create a new LV of exactly the same size on exactly the same PV extents, without wiping the first 8 kB of the LV:

lvcreate --extents EFROM-ETO --zero n --name customer007 YOUR-VG-NAME /dev/yourpv:PFROM-PTO

(As answered by thermoman earlier) the easiest way to recreate a deleted LVM volume is to create it with lvcreate without zeroing, making sure it ends up in the same position on the disk. (The exact command from thermoman's answer didn't work for me.)

Check the size and position of the deleted logical volume as they were before deletion by reading the files in /etc/lvm/archive. The size of the volume is in extent_count of segment1 (or the sum of the segment*/extent_count values if it had several segments). The position is in the stripes section, after the physical volume alias (e.g. pv0).

For example, the volume section might look like this:

    physical_volumes {
            pv0 {
                    device = "/dev/somedisk" # Hint only
                    ...
            }
    }

    logical_volumes {
            ...
            example {
                    ...
                    segment_count = 1

                    segment1 {
                            start_extent = 0
                            extent_count = 1024     # 4 Gigabytes

                            type = "striped"
                            stripe_count = 1        # linear

                            stripes = [
                                    "pv0", 30720
                            ]
                    }
            }
            ...
    }

The size of this example volume was 1024 extents, and it was located on /dev/somedisk starting from extent 30720.

Calculate the last extent as start + size - 1 = 30720 + 1024 - 1 = 31743. To recreate that volume, issue the following:

lvcreate --extents 1024 --zero n --name example vgname /dev/somedisk:30720-31743
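
If you want to sanity-check the placement afterwards, something like the following should work (seg_pe_ranges is a standard lvs field; the volume and device names match the example above):

lvs -o lv_name,seg_pe_ranges vgname
file -s /dev/vgname/example

The lvs output should show /dev/somedisk:30720-31743, and file -s should identify the old filesystem rather than plain "data" if the extents lined up.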

I had a similar situation. I had all of the PVs containing the desired LVs, but my VG showed missing PVs and 0 LVs. I recovered by doing the following:

  1. Become root
  2. Run pvs to collect UUIDs for all the drives (see the sketch after this list).
  3. Review the files in /etc/lvm/archive until I found one that listed all of the same UUIDs.
  4. Make a working copy of the archived config file, and start editing.
  5. In the physical_volumes section, set the device = lines to match the current device/UUIDs reported by pvs, clear any "MISSING" flags, and remove any pvN sections that actually were missing.
  6. In the logical_volumes section, remove any listings that had stripes on the pvN sections that no longer existed.
  7. That was it, then I ran

    vgcfgrestore --test vg -f /root/dangerously_edited.vg

  8. When that worked, I re-ran without the --test option.
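
For step 2, this is roughly the command I mean (pv_name and pv_uuid are standard pvs fields):

    pvs -o pv_name,pv_uuid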

I got into my particular situation by extending the VG with the PVs sdg and sdh. Then I created a new LV, specifying /dev/sdg /dev/sdh on the command line so that I knew the new LV was on those drives. Then I moved just those drives to a new machine. The old machine was very upset about the missing drives, and when I force-removed them, it also removed ALL the LVs. Bummer.
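
For the record, the sequence that set this trap looked roughly like the following; the LV name and size here are hypothetical, the device names are from the story above:

    vgextend vg /dev/sdg /dev/sdh
    lvcreate -L 100G -n newlv vg /dev/sdg /dev/sdh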

Next time, of course, I'll create a new VG to avoid this problem.