Difference between Xen PV, Xen KVM and HVM?

Solution 1:

Xen supported virtualization types

Xen supports running two different types of guests. Xen guests are often called domUs (unprivileged domains). Both guest types (PV and HVM) can be used at the same time on a single Xen system.

Xen Paravirtualization (PV)

Paravirtualization is an efficient and lightweight virtualization technique introduced by Xen and later adopted by other virtualization solutions. Paravirtualization doesn't require virtualization extensions from the host CPU. However, paravirtualized guests require a special kernel that is ported to run natively on Xen, so the guests are aware of the hypervisor and can run efficiently without emulation or emulated virtual hardware. Xen PV guest kernels exist for the Linux, NetBSD, FreeBSD, OpenSolaris and Novell NetWare operating systems.

PV guests don't have any kind of emulated virtual hardware, but a graphical console is still possible using the guest pvfb (paravirtual framebuffer). The PV guest graphical console can be viewed using a VNC client or Red Hat's virt-viewer; there's a separate VNC server in dom0 for each guest's PVFB.

Upstream kernel.org Linux kernels since Linux 2.6.24 include Xen PV guest (domU) support based on the Linux pvops framework, so every upstream Linux kernel can automatically be used as a Xen PV guest kernel without any additional patches or modifications.
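As a quick illustration, here is a minimal Python sketch that checks from inside a Linux guest whether it is running under Xen. It relies on the sysfs hypervisor interface exposed by pvops kernels; treat the exact paths as an assumption, since they can vary by kernel and distribution:

```python
#!/usr/bin/env python3
"""Rough check, from inside a Linux guest, of whether we are running
under Xen. Assumes the sysfs hypervisor interface exposed by pvops
kernels; exact paths may vary by kernel and distribution."""

def hypervisor_type():
    # pvops kernels expose the hypervisor type via sysfs.
    try:
        with open("/sys/hypervisor/type") as f:
            return f.read().strip()  # e.g. "xen"
    except FileNotFoundError:
        return None

if __name__ == "__main__":
    hv = hypervisor_type()
    if hv == "xen":
        print("Running as a Xen guest (PV or HVM).")
    else:
        print("No Xen hypervisor detected (got: %r)." % hv)
```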

See the XenParavirtOps wiki page for more information about Linux pvops Xen support.

Xen Full virtualization (HVM)

Fully virtualized, aka HVM (Hardware Virtual Machine), guests require CPU virtualization extensions from the host CPU (Intel VT-x, AMD-V). Xen uses a modified version of QEMU to emulate full PC hardware for HVM guests, including the BIOS, an IDE disk controller, a VGA graphics adapter, a USB controller, a network adapter, etc. The CPU virtualization extensions are used to boost performance of the emulation. Fully virtualized guests don't require a special kernel, so, for example, Windows operating systems can be used as Xen HVM guests. Fully virtualized guests are usually slower than paravirtualized guests because of the required emulation.
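To check whether a host CPU has these extensions, you can look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A minimal sketch (Linux-only; note that the flag only reflects the CPU's capability, and the feature can still be disabled in firmware):

```python
#!/usr/bin/env python3
"""Look for hardware virtualization flags in /proc/cpuinfo (Linux).
vmx = Intel VT-x, svm = AMD-V. The flag reflects CPU capability;
the feature can still be disabled in firmware/BIOS."""

def virt_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return flags & {"vmx", "svm"}
    return set()

if __name__ == "__main__":
    found = virt_flags()
    if found:
        print("Virtualization extensions found:", ", ".join(sorted(found)))
        print("Xen HVM guests are possible on this CPU.")
    else:
        print("No vmx/svm flag; only Xen PV guests would work here.")
```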

To boost performance, fully virtualized HVM guests can use special paravirtual device drivers to bypass the emulation for disk and network I/O. Xen Windows HVM guests can use the open-source GPLPV drivers. See the XenLinuxPVonHVMdrivers wiki page for more information about Xen PV-on-HVM drivers for Linux HVM guests.
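A quick way to see from inside a Linux guest whether the disks are actually using the PV path is to look at the block device names: the Xen PV block driver conventionally names devices xvd*, while QEMU-emulated IDE/SCSI disks show up as hd*/sd*. A rough sketch (the naming is conventional, not guaranteed):

```python
#!/usr/bin/env python3
"""Rough classification of a guest's block devices, based on the
conventional Linux naming: xvd* = Xen PV block driver,
hd*/sd* = emulated IDE/SCSI (or real hardware). Not guaranteed."""
import os

for dev in sorted(os.listdir("/sys/block")):
    if dev.startswith("xvd"):
        kind = "Xen PV block device (PV driver in use)"
    elif dev.startswith(("hd", "sd")):
        kind = "emulated IDE/SCSI disk (or physical hardware)"
    else:
        kind = "other (loop, ram, virtio, ...)"
    print(f"{dev}: {kind}")
```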

This is from http://wiki.xenproject.org/wiki/XenOverview

KVM is not Xen at all; it is a different technology. KVM is a native Linux kernel module, not an additional kernel like Xen, which makes KVM a better design. The downside is that KVM is newer than Xen, so it might be lacking some features.
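Since KVM is just a set of kernel modules, you can check on a Linux host whether it is loaded by reading /proc/modules; kvm is the core module, and kvm_intel / kvm_amd are the CPU-specific parts. A minimal sketch:

```python
#!/usr/bin/env python3
"""Check whether the KVM kernel modules are loaded on a Linux host.
kvm is the core module; kvm_intel / kvm_amd are CPU-specific."""

with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}

for mod in ("kvm", "kvm_intel", "kvm_amd"):
    print(f"{mod}: {'loaded' if mod in loaded else 'not loaded'}")
```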

Solution 2:

Xen is a hypervisor that runs on bare metal (the PC/server) and then hosts virtual machines called domains.

A Xen PV domain is a paravirtualized domain; that means the operating system (usually we're talking Linux here) has been modified to run under Xen, and there's no need to actually emulate hardware. This should be the most efficient way to go, performance-wise.

A Xen HVM domain is a hardware-emulated domain; that means the operating system (which could be Linux, Windows, whatever) has not been modified in any way and the hardware gets emulated. This is rather slow, so you usually install PV drivers in the guest OS for the performance-critical hardware (usually disk and network): the guest as a whole runs fully virtualized, but the most performance-critical pieces of hardware run paravirtualized. Recent Linux systems have PV drivers for both disk and network in the kernel, and various PV drivers exist for Windows too. With all the development on HVM in recent years, there is usually little difference in performance between HVM and PV for standard workloads.

KVM is not Xen; it is another virtualization platform, built inside the Linux kernel. From a guest's point of view it resembles Xen HVM: the guest runs fully virtualized, and there are specific drivers (virtio) to run some parts paravirtualized (again, disk and network).
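From inside a Linux KVM guest, the paravirtualized devices show up on the virtio bus in sysfs. A small sketch to list them (the /sys/bus/virtio path is the standard Linux location, but treat the exact layout as an assumption):

```python
#!/usr/bin/env python3
"""List virtio devices visible inside a Linux KVM guest. An empty
list usually means the guest runs on fully emulated hardware
(or is not a KVM guest at all)."""
import os

VIRTIO_BUS = "/sys/bus/virtio/devices"

if os.path.isdir(VIRTIO_BUS):
    for dev in sorted(os.listdir(VIRTIO_BUS)):
        # Each device directory typically carries a modalias string.
        try:
            with open(os.path.join(VIRTIO_BUS, dev, "modalias")) as f:
                print(dev, f.read().strip())
        except OSError:
            print(dev)
else:
    print("No virtio bus found; no KVM paravirtual drivers in use.")
```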

Both Xen HVM and Linux KVM need hardware-assisted virtualization support (Intel VT-x, AMD-V), whereas Xen PV does not; however, Xen PV can't run operating systems without PV support (you can't run Windows on Xen PV).

Both Xen HVM and Linux KVM use parts of the QEMU virtualization software to emulate actual hardware for devices not using PV drivers in the guest system.

Xen (both PV and HVM) can do live migration of a running guest from one physical server to another; I don't know whether KVM can, too.

Neither Xen nor KVM can overcommit memory, so you usually get "true RAM", while other platforms like VMware can swap part of the guest RAM to disk.

There are other differences, but they usually apply to specific installations and not to the generic virtual private server sold to other people. For example, recent Xen hypervisors support transcendent memory, which could improve memory utilization and guest performance if the guest has support for it (Linux kernels >= 3.something).

All of these technologies will give you a great experience if they are implemented correctly, and will not make a big difference from your point of view. Of course, there are a thousand ways things can go wrong, and that's not related to the specific virtualization solution (e.g., your guest could be stored on slow disks, and that would hurt your performance).