Incredibly low KVM disk performance (qcow2 disk files + virtio)
I'm having some serious disk performance problems while setting up a KVM guest. Using a simple dd test, the partition on the host that the qcow2 images reside on (a mirrored RAID array) writes at over 120MB/s, while my guest gets writes ranging from 0.5 to 3MB/s.
- The guest is configured with a couple of CPUs and 4G of RAM and isn't currently running anything else; it's a completely minimal install at the moment.
- Performance is tested using time dd if=/dev/zero of=/tmp/test oflag=direct bs=64k count=16000.
- The guest is configured to use virtio, but this doesn't appear to make any difference to performance (see the XML sketch after this list).
- The host partitions are 4kb aligned (and performance is fine on the host, anyway).
- Using writeback caching on the disks increases the reported performance massively, but I'd prefer not to use it; even without it performance should be far better than this.
- Host and guest are both running Ubuntu 12.04 LTS, which comes with qemu-kvm 1.0+noroms-0ubuntu13 and libvirt 0.9.8-2ubuntu17.1.
- Host has the deadline IO scheduler enabled and the guest has noop.
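For reference, the relevant disk section of the guest XML looks roughly like the sketch below; the image path and target device here are placeholders rather than my real ones, and the cache attribute on the driver line is where writeback vs. none gets selected:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <!-- placeholder path; the real image sits on the mirrored RAID partition -->
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>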
There seem to be plenty of guides out there on tweaking KVM performance, and I'll get to those eventually, but I'd expect vastly better performance than this even at this stage, so something seems to be very wrong already.
Update 1
Suddenly, when I go back and test now, it's 26.6 MB/s; this is more like what I expected with qcow2. I'll leave the question up in case anyone has any ideas as to what the problem might have been (and in case it mysteriously returns).
Update 2
I stopped worrying about qcow2 performance and just cut over to LVM on top of RAID1 with raw images, still using virtio but setting cache='none' and io='native' on the disk drive. Write performance is now approx. 135MB/s using the same basic test as above, so there doesn't seem to be much point in figuring out what the problem was when it can be so easily worked around entirely.
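For anyone wanting to reproduce this, it boils down to something like the following; the volume group, LV name, and size are placeholders for mine:

lvcreate -L 40G -n guest-disk vg0

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- placeholder VG/LV names; point this at the LV created above -->
  <source dev='/dev/vg0/guest-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>

Since the LV is handed to the guest as a raw block device, there's no image format overhead at all, which is presumably why it tracks the host's numbers so closely.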
Well, yeah, qcow2 files aren't designed for blazingly fast performance. You'll have much better luck with raw partitions (or, preferably, LVs).
How to achieve top performance with QCOW2:
qemu-img create -f qcow2 -o preallocation=metadata,compat=1.1,lazy_refcounts=on imageXYZ 20G
The most important option is preallocation, which gives a nice boost according to the qcow2 developers. It is almost on par with LVM now! (Note that qemu-img create also requires a size argument; the 20G above is just an example.) These options are usually enabled by default in modern (Fedora 25+) Linux distros.
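To double-check that the options actually took effect, qemu-img info on the image shows the format details; on reasonably recent QEMU versions the compat level and lazy refcounts appear under "Format specific information", and the on-disk size will be larger than a plain sparse image because of the preallocated metadata:

qemu-img info imageXYZ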
You can also use the unsafe cache mode if this is not a production instance (this is dangerous and not recommended, only good for testing):
<driver name='qemu' cache='unsafe' />
Some users report that this configuration beats the LVM/unsafe configuration in some tests.
All of these parameters require QEMU 1.5+! Again, most modern distros have it.
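To apply the driver line above to an existing guest, edit the domain XML and then do a full stop/start; the guest name below is a placeholder, and note that XML changes only take effect the next time the domain is started, not on a reboot from inside the guest:

virsh edit guestname
virsh shutdown guestname
virsh start guestname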
I achieved great results for a qcow2 image with this setting:
<driver name='qemu' type='raw' cache='none' io='native'/>
which disables guest caching and enables AIO (asynchronous IO). Running your dd command gave me 177MB/s on the host and 155MB/s in the guest. The image is placed on the same LVM volume where the host's test was done.
My qemu-kvm version is 1.0+noroms-0ubuntu14.8 and my kernel is 3.2.0-41-generic from stock Ubuntu 12.04.2 LTS.