How can I create an LVM VDO volume on top of LVM RAID?
I'm using LVM to set up my lower-level storage with RAID5 via lvcreate --type raid5 --size 2T -I 256K -i 3 -n my_lv my_vg
Now I want to set up VDO on top of this, using LVM too. I'm aware of this man page, but when I tried lvconvert --type vdo-pool -V 20T my_vg/my_lv
I just ended up with no RAID.
If I create a PV on top of my RAID LV to set up VDO, would I get a fully functional dmeventd stack with VDO?
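Something like the following is what I have in mind (just a sketch; the nested VG/LV names are placeholders, and on recent LVM versions scan_lvs = 1 may be needed in lvm.conf before a PV on top of an LV is even scanned):
# Turn the RAID5 LV into a PV and build a nested VG on it:
pvcreate /dev/my_vg/my_lv
vgcreate vdo_vg /dev/my_vg/my_lv
# Create the VDO pool plus a 20T virtual volume inside the nested VG:
lvcreate --type vdo -l 100%FREE -V 20T -n my_vdo_lv vdo_vg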
You should end up with a RAID-protected VDO volume, but lvm itself is not very clear in reporting it.
I tried with a CentOS 8.2 box and a RAID1 LVM volume, converting it to a vdo-pool type with the following command:
lvconvert --type vdo-pool -n VDOLV -V 1G vg_test/lv_test
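For reference, the test stack was built more or less like this (a sketch; the zvol-backed device names are my assumptions, reconstructed from the outputs below):
# Two small PVs backing the test VG (device names are assumptions):
pvcreate /dev/zd0 /dev/zd16
vgcreate vg_test /dev/zd0 /dev/zd16
# A two-way RAID1 LV that will become the VDO pool data volume:
lvcreate --type raid1 -m 1 -L 8G -n lv_test vg_test
# Convert it in place to a vdo-pool, exposing a 1G VDO volume on top:
lvconvert --type vdo-pool -n VDOLV -V 1G vg_test/lv_test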
How to check that RAID1 is actually in place? You need to pass some additional reporting options to lvs. Executing lvs -o +seg_type -a resulted in:
[root@localhost ~]# lvs -o +seg_type -a
  LV                       VG      Attr        LSize Pool    Origin  Data% Meta% Move Log Cpy%Sync Convert Type
  root                     system  -wi-ao---- 50.00g                                                        linear
  swap                     system  -wi-ao----  7.90g                                                        linear
  VDOLV                    vg_test vwi-a-v---  1.00g lv_test         0.00                                   vdo
  lv_test                  vg_test dwi-------  8.00g                37.62                                   vdo-pool
  [lv_test_vdata]          vg_test rwi-aor---  8.00g                                        100.00          raid1
  [lv_test_vdata_rimage_0] vg_test iwi-aor---  8.00g                                                        linear
  [lv_test_vdata_rimage_1] vg_test iwi-aor---  8.00g                                                        linear
  [lv_test_vdata_rmeta_0]  vg_test ewi-aor---  4.00m                                                        linear
  [lv_test_vdata_rmeta_1]  vg_test ewi-aor---  4.00m                                                        linear
Note the raid1 segment type. And dmsetup table shows the same (again, see the RAID rimage/rmeta devices):
[root@localhost ~]# dmsetup table
vg_test-lv_test_vdata_rimage_1: 0 16777216 linear 230:16 10240
vg_test-lv_test-vpool: 0 2099200 vdo V2 /dev/dm-6 2097152 4096 32768 16380 on auto vg_test-lv_test-vpool maxDiscard 1 ack 1 bio 4 bioRotationInterval 64 cpu 2 hash 1 logical 1 physical 1
vg_test-lv_test_vdata: 0 16777216 raid raid1 3 0 region_size 4096 2 253:2 253:3 253:4 253:5
vg_test-VDOLV: 0 2097152 linear 253:7 1024
vg_test-lv_test_vdata_rimage_0: 0 16777216 linear 230:0 10240
vg_test-lv_test_vdata_rmeta_1: 0 8192 linear 230:16 2048
system-swap: 0 16572416 linear 8:2 104859648
vg_test-lv_test_vdata_rmeta_0: 0 8192 linear 230:0 2048
system-root: 0 104857600 linear 8:2 2048
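To keep an eye on the RAID sublayer, you can also query the RAID-specific fields and trigger a scrub. A sketch (whether lvchange accepts the hidden vdata sub-LV directly may depend on your LVM version):
# Report sync status and any running sync action for all (sub-)LVs:
lvs -a -o name,segtype,sync_percent,raid_sync_action,raid_mismatch_count vg_test
# Scrub the RAID volume holding the VDO pool data (name taken from the output above):
lvchange --syncaction check vg_test/lv_test_vdata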
Finally, I removed a block device and tried to re-import the pool. It succeeded (with a warning about the missing device):
[root@localhost ~]# lvs -o +seg_type -a
WARNING: Couldn't find device with uuid 8jLeqt-TRKt-IVHy-JP0g-mAta-XL2k-cXpEdF.
WARNING: VG vg_test is missing PV 8jLeqt-TRKt-IVHy-JP0g-mAta-XL2k-cXpEdF (last written to /dev/zd16).
  LV                       VG      Attr        LSize Pool    Origin  Data% Meta% Move Log Cpy%Sync Convert Type
  root                     system  -wi-ao---- 50.00g                                                        linear
  swap                     system  -wi-ao----  7.90g                                                        linear
  VDOLV                    vg_test vwi-a-v-p-  1.00g lv_test         0.06                                   vdo
  lv_test                  vg_test dwi-----p-  8.00g                37.63                                   vdo-pool
  [lv_test_vdata]          vg_test rwi-aor-p-  8.00g                                        100.00          raid1
  [lv_test_vdata_rimage_0] vg_test iwi-aor---  8.00g                                                        linear
  [lv_test_vdata_rimage_1] vg_test Iwi-aor-p-  8.00g                                                        linear
  [lv_test_vdata_rmeta_0]  vg_test ewi-aor---  4.00m                                                        linear
  [lv_test_vdata_rmeta_1]  vg_test ewi-aor-p-  4.00m                                                        linear
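To recover from such a failure, something along these lines should work; this part is a sketch I did not fully test on this exact stack, and the replacement device name is a placeholder:
# Activate the VG even with a missing PV:
vgchange -ay --activationmode degraded vg_test
# After adding a replacement PV, rebuild the failed RAID image:
vgextend vg_test /dev/zd32
lvconvert --repair vg_test/lv_test_vdata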
So, it should work. However, RAID and VDO are relatively recent additions to LVM (which, by the way, is growing in complexity), and care should be taken when mixing different segment types. For that reason I generally use plain mdadm
to create the software RAID array, layering LVM on top of it, as sketched below.
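A minimal sketch of that mdadm-based alternative (device names and sizes are placeholders, not a tested recipe):
# Classic software RAID5 over four disks:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Plain LVM on top of the md array, with VDO as a regular LVM segment type:
pvcreate /dev/md0
vgcreate my_vg /dev/md0
lvcreate --type vdo -L 2T -V 20T -n my_lv my_vg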
If you want, post the output of lvs -o +seg_type -a to let me (and others) examine your LVM setup after creating the RAID and VDO volumes. Anyway, be sure to triple-check your RAID setup before putting any valuable data on your volumes.