A Google search turns up many websites discussing the performance of different filesystems with KVM.

Take a look at this one: ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison

According to the author, Gionatan Danti:

The tested scenarios are:

1) Qcow2 backend on top of XFS filesystem on top of a raw MD device. Both thin and partial (metadata only) preallocation modes were benchmarked;

2) Logical Volumes backend, both in classical LVM (fat preallocation) and thin (thin lvm target) modes. Moreover, thin lvm was analyzed with both zeroing on and off;

3) raw images on XFS and EXT4 on top of classical LVM, relying on filesystem sparse-file support for thin provisioning;

4) raw images on XFS and EXT4 on top of thin LVM, relying on thin lvm target for thin provisioning. In this case, LVM zeroing was disabled as the to-be-zero blocks are directly managed inside the filesystem structures;

5) raw images on BTRFS on top of its mirror+stripe implementation (no MD here). I benchmarked BTRFS with CoW both enabled and disabled (nodatacow mount option)

6) raw images on ZFS on top of its mirror+stripe implementation (no MD again)
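The two qcow2 preallocation modes from scenario 1 can be reproduced with qemu-img. A minimal sketch (paths and sizes are illustrative, not taken from the benchmark):

```shell
# Skip gracefully if QEMU's image tool is not installed.
command -v qemu-img >/dev/null 2>&1 || exit 0

IMG_DIR=$(mktemp -d)   # scratch directory; real images would live on the XFS volume

# Thin: no preallocation at all (the qcow2 default)
qemu-img create -f qcow2 "$IMG_DIR/vm-thin.qcow2" 20G

# Partial: preallocate metadata only; data clusters are still allocated on demand
qemu-img create -f qcow2 -o preallocation=metadata "$IMG_DIR/vm-meta.qcow2" 20G

# The metadata-preallocated file occupies slightly more space on disk
du -h "$IMG_DIR"/*.qcow2
```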

He concludes with:

For VMs storage, stay well away from BTRFS: not only is it marked a “Tech Preview” by RedHat (read: not 100% production ready), but it is very slow when used as a VM images store.

Other blogs also discuss BTRFS, and on many forums you can read that Copy On Write (CoW) needs to be disabled to get better performance with KVM.
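On BTRFS, the usual way to disable CoW is the No_COW file attribute, set on the image directory before any image is written so that new files inherit it. A hedged sketch (the directory here is a stand-in; `chattr +C` only has an effect on btrfs):

```shell
DIR=$(mktemp -d)   # stand-in for e.g. /var/lib/libvirt/images

# The attribute only matters on btrfs, so check the filesystem type first.
if [ "$(stat -f -c %T "$DIR")" = "btrfs" ]; then
    chattr +C "$DIR"   # files created in here from now on are No_COW (nodatacow)
    lsattr -d "$DIR"   # the 'C' flag confirms the attribute is set
else
    echo "not on btrfs; chattr +C would have no effect here"
fi
```

Note that the attribute must be set on empty files or on the directory before images are created; it does not retroactively un-CoW existing data.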

Chris Irwin talks about the benefits of BTRFS and mentions an alternative:

There are other tools, or you could roll your own cron-job.

So what about ZFS? I thought ZFS did all these things?

Yes, it does.

Why not just use ZFS?

Go ahead.

Link: live with btrfs

Another way to find out whether it is okay for your use case is to test for yourself whether the performance is good and whether it is reliable with copy-on-write disabled.
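One way to run such a test is fio, comparing random writes into a normal directory against one marked nodatacow. A minimal sketch (file size and options are illustrative; a real test on btrfs would use a much larger file and direct I/O):

```shell
# Skip gracefully if fio is not installed.
command -v fio >/dev/null 2>&1 || exit 0

TEST_DIR=$(mktemp -d)   # point this at the btrfs volume under test in a real run

# Small 4k random-write job with periodic fsyncs, the access pattern VM images
# suffer from. Run it once in a plain directory and once in a 'chattr +C'
# directory, then compare the reported IOPS.
fio --name=cow-test --directory="$TEST_DIR" \
    --rw=randwrite --bs=4k --size=32M \
    --ioengine=psync --fsync=32 --output="$TEST_DIR/cow-test.log"

grep -i iops "$TEST_DIR/cow-test.log"
```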

If BTRFS is not the best fit for you, you can try ZFS. You get the same backup functionality and a lot of other improvements, but it is a bit trickier to set up on Linux.


My KVM orchestration solution of choice, oVirt, uses LVM volumes handed as raw disks to VMs for maximum performance, scalability and flexibility. You can do both qcow2 and LVM snapshots. If you are building a new storage solution and want to try something SDS-ish and fancy, you could go with Ceph and RBD-access to volumes instead.
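As a sketch of that LVM-as-raw-disk approach outside of oVirt (the volume group, domain name, and size are hypothetical; this needs root and skips itself when the tools or the VG are absent):

```shell
# Skip unless the LVM tools and the (hypothetical) volume group exist.
command -v lvcreate >/dev/null 2>&1 || exit 0
VG=${VG:-vg0}                    # hypothetical volume group name
vgs "$VG" >/dev/null 2>&1 || exit 0

# Fully preallocated ("fat") logical volume for the guest
lvcreate -L 20G -n vm01-disk0 "$VG"

# Hand it to a libvirt guest as a raw virtio disk (hypothetical domain 'vm01')
virsh attach-disk vm01 "/dev/$VG/vm01-disk0" vdb \
    --driver qemu --subdriver raw --persistent
```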