Difference between KVM and LXC
What is the difference between KVM and Linux Containers (LXC)? It seems to me that LXC is also a way of creating multiple isolated environments within the same kernel, using both the "namespaces" and "control groups" features of the kernel.
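For example, my understanding is that the kernel primitives alone already allow something like the following (a minimal sketch; the hostname and flag choices are just illustrative, and it needs root to run):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[1024 * 1024];

    /* Runs in the fresh UTS, PID and mount namespaces created by clone() below. */
    static int child_fn(void *arg)
    {
        (void)arg;
        sethostname("container", 9);      /* visible only inside the new UTS namespace */
        printf("pid inside the namespace: %d\n", (int)getpid());  /* prints 1 */
        execlp("/bin/sh", "/bin/sh", (char *)NULL);
        perror("execlp");
        return 1;
    }

    int main(void)
    {
        /* Needs CAP_SYS_ADMIN; the stack grows down, so pass its top. */
        pid_t pid = clone(child_fn, child_stack + sizeof(child_stack),
                          CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD,
                          NULL);
        if (pid == -1) { perror("clone"); exit(EXIT_FAILURE); }
        waitpid(pid, NULL, 0);
        return 0;
    }

So what does KVM add beyond this kind of kernel-level isolation?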
Solution 1:
Text from https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/Resource_Management_and_Linux_Containers_Guide/sec-Linux_Containers_Compared_to_KVM_Virtualization.html Copyright © 2014 Red Hat, Inc.:
Linux Containers Compared to KVM Virtualization
The main difference between KVM virtualization and Linux Containers is that virtual machines require a separate kernel instance to run on, while containers can be deployed from the host operating system. This significantly reduces the complexity of container creation and maintenance, and the reduced overhead lets you create a large number of containers with faster startup and shutdown speeds. Both Linux Containers and KVM virtualization have certain advantages and drawbacks that influence the use cases in which these technologies are typically applied:
KVM virtualization
KVM virtualization lets you boot full operating systems of different kinds, even non-Linux systems. However, a complex setup is sometimes needed. Virtual machines are resource-intensive so you can run only a limited number of them on your host machine.
Running separate kernel instances generally means better separation and security. If one of the kernels terminates unexpectedly, it does not disable the whole system. On the other hand, this isolation makes it harder for virtual machines to communicate with the rest of the system, and therefore several interpretation mechanisms must be used.
A guest virtual machine is isolated from host changes, which lets you run different versions of the same application on the host and virtual machine. KVM also provides many useful features such as live migration. For more information on these capabilities, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide.
Linux Containers
The current version of Linux Containers is designed primarily to support isolation of one or more applications, with plans to implement full OS containers in the near future. You can create or destroy containers very easily and they are convenient to maintain.
System-wide changes are visible in each container. For example, if you upgrade an application on the host machine, this change will apply to all sandboxes that run instances of this application.
Since containers are lightweight, a large number of them can run simultaneously on a host machine. The theoretical maximum is 6000 containers and 12,000 bind mounts of root file system directories. Also, containers are faster to create and have low startup times.
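The control-group half of the question is what makes this density manageable: the host caps each container's resources through the cgroup filesystem. Here is a minimal hand-rolled sketch, assuming a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory (the mount point and the "demo" group name are illustrative; cgroup v2 uses different file names):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Illustrative group name; the v1 mount point varies by distribution. */
    #define CG "/sys/fs/cgroup/memory/demo"

    static void write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(EXIT_FAILURE); }
        fputs(value, f);
        fclose(f);
    }

    int main(void)
    {
        char pid[32];

        if (mkdir(CG, 0755) == -1)        /* creating the directory creates the group */
            perror("mkdir (may already exist)");

        write_file(CG "/memory.limit_in_bytes", "268435456");   /* 256 MiB cap */

        snprintf(pid, sizeof(pid), "%d", (int)getpid());
        write_file(CG "/tasks", pid);     /* move this process into the group */

        /* The shell and everything it spawns now inherit the 256 MiB limit. */
        execlp("/bin/sh", "/bin/sh", (char *)NULL);
        perror("execlp");
        return 1;
    }

LXC itself performs equivalent writes based on the lxc.cgroup.* entries in a container's configuration.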
Solution 2:
This whitepaper explains the difference between hypervisors and Linux containers, and gives some of the history behind containers: http://sp.parallels.com/fileadmin/media/hcap/pcs/documents/ParCloudStorage_Mini_WP_EN_042014.pdf
An excerpt from the paper: a hypervisor works by having the host operating system emulate machine hardware and then bringing up other virtual machines (VMs) as guest operating systems on top of that hardware. This means that the communication between guest and host operating systems must follow a hardware paradigm (anything that can be done in hardware can be done by the host to the guest).
On the other hand, container virtualization (shown in figure 2), is virtualization at the operating system level, instead of the hardware level. So each of the guest operating systems shares the same kernel, and sometimes parts of the operating system, with the host. This enhanced sharing gives containers a great advantage in that they are leaner and smaller than hypervisor guests, simply because they're sharing much more of the pieces with the host. It also gives them the huge advantage that the guest kernel is much more efficient about sharing resources between containers, because it sees the containers as simply resources to be managed.
An example: Container 1 and Container 2 open the same file. The host kernel opens the file and puts pages from it into the kernel page cache. These pages are then handed out to Container 1 and Container 2 as they are needed, and if both want to read the same position, they both get the same page. If VM1 and VM2 do the same thing, the host opens the file (creating pages in the host page cache), but then each kernel in VM1 and VM2 does the same thing independently. If VM1 and VM2 read the same file, there are now three separate copies of each page (one in the page cache of the host kernel and one in each of the VM1 and VM2 kernels), simply because the VMs cannot share a page the way containers can. This advanced sharing means that density (the number of containers or virtual machines you can run on the system) can be up to three times higher in the container case than with a hypervisor.
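One rough way to observe this sharing from user space (an illustrative sketch, not from the paper): map the same file twice, fault a page in through the first mapping, and ask the kernel via mincore(2) whether that page is resident through the second. Both mappings are backed by the same page cache page:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) == -1 || st.st_size == 0) {
            fprintf(stderr, "need a non-empty file\n");
            return 1;
        }
        size_t len = (size_t)st.st_size;

        /* Two independent mappings of the same file. */
        char *a = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        char *b = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (a == MAP_FAILED || b == MAP_FAILED) { perror("mmap"); return 1; }

        volatile char sink = a[0];    /* fault the first page in via mapping 'a' */
        (void)sink;

        long page = sysconf(_SC_PAGESIZE);
        size_t npages = (len + (size_t)page - 1) / (size_t)page;
        unsigned char *vec = malloc(npages);
        if (!vec) return 1;

        /* mincore() reports page cache residency for mapping 'b'. */
        if (mincore(b, len, vec) == -1) { perror("mincore"); return 1; }
        printf("first page resident via the second mapping: %s\n",
               (vec[0] & 1) ? "yes -- both mappings share one cached page" : "no");

        free(vec);
        munmap(a, len);
        munmap(b, len);
        close(fd);
        return 0;
    }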
Summary: KVM is a hypervisor based on emulating virtual hardware. Containers, by contrast, are based on a shared operating system and are much leaner. This imposes a limitation, though: containers share a single kernel, so you cannot run Windows and Linux guests side by side on the same host.
Solution 3:
LXC, or Linux Containers, are lightweight, portable, OS-level virtualization units that share the base operating system's kernel while acting as isolated environments with their own file system, process tree, and TCP/IP stack. They can be compared to Solaris Zones or FreeBSD Jails. Since there is no virtualization overhead, they perform much better than virtual machines.
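As a concrete illustration, here is a minimal sketch of driving LXC from C through its liblxc API (the container name and the download-template arguments are placeholders; build with -llxc):

    #include <lxc/lxccontainer.h>
    #include <stdio.h>

    int main(void)
    {
        /* "demo" is an illustrative container name. */
        struct lxc_container *c = lxc_container_new("demo", NULL);
        if (!c) { fprintf(stderr, "failed to set up container object\n"); return 1; }

        /* Fetch a root filesystem with the "download" template
           (distro/release/arch arguments are examples). */
        if (!c->createl(c, "download", NULL, NULL, LXC_CREATE_QUIET,
                        "-d", "ubuntu", "-r", "trusty", "-a", "amd64", NULL)) {
            fprintf(stderr, "failed to create container rootfs\n");
            lxc_container_put(c);
            return 1;
        }

        /* "Booting" the container is just starting its init process:
           no kernel, no firmware, no device emulation. */
        if (!c->start(c, 0, NULL))
            fprintf(stderr, "failed to start container\n");

        printf("container state: %s\n", c->state(c));

        c->shutdown(c, 30);            /* give init 30 seconds to stop cleanly */
        lxc_container_put(c);
        return 0;
    }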
KVM represents the virtualization capabilities built into the Linux kernel itself. Because the kernel running on the bare metal acts as the hypervisor once the kvm module is loaded, KVM is usually classified as a type 1 hypervisor, even though it lives inside a general-purpose host OS rather than a dedicated one.
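Concretely, the kernel exposes these capabilities to user space as a device node plus a small ioctl surface; the following minimal sketch runs on any host with the kvm module loaded:

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);      /* the KVM module's device node */
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version); /* 12 on all modern kernels */

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);    /* a bare VM: no memory, no vCPUs yet */
        if (vm < 0) { perror("KVM_CREATE_VM"); close(kvm); return 1; }

        /* A real VMM would now map guest memory and create vCPUs. */
        close(vm);
        close(kvm);
        return 0;
    }

A full VMM such as QEMU builds device emulation, guest memory mapping, and vCPU threads on top of exactly this interface.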