KVM/qemu - use LVM volumes directly without image file?
- qemu-kvm can use LVs directly as virtual disks instead of image files; this is actually quite a common use case.
- libguestfs (and the whole set of virt-* tools) can provide access to guest filesystems more cleanly than anything you remount on the host directly, though both are possible.
- Online FS resizing is not a feature of KVM itself, but something the guest OS must be capable of. resize2fs works in a VM just as it does on physical hardware; the only problem is the guest redetecting the size change. Try virt-resize as the standard tool, but lvresize and qemu-img can also easily be used (though usually offline, requiring a guest restart).
I think lvresize with resize2fs will actually work without a guest restart, but I haven't tried it yet.
I use qemu-kvm+libvirt with exactly the configuration you're asking about, for the reasons you listed, but additionally because I get much better performance without the KVM host's filesystem layer in scope. If you add the VG as a 'storage pool' in virt-manager, you can create such VMs using its user-friendly wizard. (But these days I just write the XML by hand, using an existing VM as a template.)
Here's sanitised output of 'virsh dumpxml' for one of my guests:
<domain type='kvm'>
<name>somevm</name>
<uuid>f173d3b5-704c-909e-b597-c5a823ad48c9</uuid>
<description>Windows Server 2008 R2</description>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-1.1'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>Nehalem</model>
<vendor>Intel</vendor>
<feature policy='require' name='tm2'/>
<feature policy='require' name='est'/>
<feature policy='require' name='monitor'/>
<feature policy='require' name='smx'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='dtes64'/>
<feature policy='require' name='rdtscp'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='ds'/>
<feature policy='require' name='pbe'/>
<feature policy='require' name='tm'/>
<feature policy='require' name='pdcm'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='ds_cpl'/>
<feature policy='require' name='xtpr'/>
<feature policy='require' name='acpi'/>
</cpu>
<clock offset='localtime'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/vg1/somevm'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<interface type='bridge'>
<mac address='00:00:00:00:00:00'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='tablet' bus='usb'/>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes'/>
<video>
<model type='vga' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</memballoon>
</devices>
<seclabel type='none' model='none'/>
</domain>
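If you go the hand-written-XML route, loading a definition like the one above into libvirt is just (domain name from the dump; the filename is whatever you saved it as):

```shell
# Register the hand-written definition with libvirt, then boot it
virsh define somevm.xml
virsh start somevm

# A persistent guest's current definition can be dumped back out for editing
virsh dumpxml somevm > somevm.xml
```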
Another thought (not relevant to your question but it might help): if you can, make sure you're using the 'paravirtualised' network, block, random, clock etc. drivers, as they're significantly faster than the fully virtualised ones. This is the "model=virtio" stuff above. You have to load driver modules into the host's kernel, such as virtio_net.
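Checking the host side of that is quick (exact module names can vary a little by distro and kernel build):

```shell
# See which virtio modules the host kernel currently has loaded
lsmod | grep virtio

# Load the network one explicitly if it's missing
modprobe virtio_net
```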
Here is output of 'virsh pool-dumpxml vg1':
<pool type='logical'>
<name>vg1</name>
<uuid>9e26648e-64bc-9221-835f-140f6def0556</uuid>
<capacity unit='bytes'>3000613470208</capacity>
<allocation unit='bytes'>1824287358976</allocation>
<available unit='bytes'>1176326111232</available>
<source>
<device path='/dev/md1'/>
<name>vg1</name>
<format type='lvm2'/>
</source>
<target>
<path>/dev/vg1</path>
<permissions>
<mode>0700</mode>
</permissions>
</target>
</pool>
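That pool was set up along these lines (device and VG names as in the dump above; since the VG already exists, libvirt simply adopts it rather than creating anything):

```shell
# Define an LVM-backed storage pool over the existing VG 'vg1'
virsh pool-define-as vg1 logical --source-dev /dev/md1 --target /dev/vg1
virsh pool-start vg1
virsh pool-autostart vg1

# New guest disks are then just new LVs in the pool, e.g. a 40G volume:
virsh vol-create-as vg1 somevm 40G
```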
I don't know of a way of exactly replicating the Xen behaviour you describe. However, you can use kpartx to expose the partitions within an LV that contains a whole-disk image as block devices on the host, which you can then mount, etc.
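A minimal kpartx sketch, assuming the guest disk lives on an LV such as /dev/vg1/somevm (the mapping names below follow device-mapper naming; check kpartx's verbose output for the actual ones on your system):

```shell
# Map the partitions inside the LV as devices under /dev/mapper/
# (-a adds the mappings, -v prints the names it created)
kpartx -av /dev/vg1/somevm

# Mount one of them on the host -- only while the guest is shut down,
# or read-only, to avoid corrupting a live filesystem
mount /dev/mapper/vg1-somevm1 /mnt

# Tear the mappings down when finished
umount /mnt
kpartx -d /dev/vg1/somevm
```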
See my answer to my own question on this issue at KVM booting off-image kernel and existing partition. In short, getting virt-install to create a config for this is pretty straightforward, given a slight modification of the guest's /etc/fstab.