Performance impact of running Linux in a virtual machine in Windows?

I'd like to know what performance impact I could expect when running Linux in a virtual machine on Windows. The job I need Linux for is heavy, almost non-stop code compilation with GCC. Dual-booting doesn't look like a very attractive solution, so right now I'm counting on low VM overhead (10-20% would be fine for me, but 50% or more would be unacceptable). Has anyone tried to measure the performance difference? Are there any comparison tables? Which virtualization product with the lowest possible overhead would you suggest?

My host OS is Win7 and I've got a modern Core i7 with VT-x present.

Thanks!


Solution 1:

Caveat: the following is based on my subjective observations, rather than proper objective testing.

Disk I/O for some load patterns is going to be at the top end of your 10-20% bracket, perhaps a little higher, but nowhere near the 50% you state. You can mitigate the I/O hit in a number of ways, including:

  • Give the VM plenty of RAM for the OS to use as cache (this of course means the host machine needs plenty of RAM too, otherwise it will starve and start swapping both itself and the VM)
  • Tell the VM manager not to let any RAM allocated to the VM swap if it can possibly help it
  • Make sure the VM keeps temporary storage in RAM by having /tmp and similar mounted as tmpfs filesystems (see the example fstab entries after this list).
  • Make a good choice of on-disk filesystems and related tweaks in the VM. If all the code and compiler output is also held elsewhere, i.e. in source control and backups, then do away with journalling to reduce write activity. Also, if your filesystem is ext2/3/4, use the noatime mount option if your build process is fine with it (most, if not all, are), or relatime if noatime is not possible (most distributions now default to relatime, I think).
  • Similarly, if your code and output are safely copied elsewhere, tell your VM manager that it is OK to buffer writes to the VM's vdisks.
  • If you don't feel safe with the riskier options above (delayed write caching, no FS journals), you can tweak your build process to use RAM for temporary storage by making it use that tmpfs-mounted /tmp as much as possible, e.g. via the TMPDIR environment variable as sketched after this list.
  • Have the VM's vdisks on a separate drive if you can; that way, if it does need to perform a sizeable chunk of I/O, it won't be competing as much with the host OS and other VMs.
  • Make sure the VM is using fixed-size virtual disks. Growable disks increase the I/O performance hit, sometimes considerably (see the sketch after this list for one way to create a fixed-size disk).
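
To make the tmpfs and noatime points concrete, here is a minimal sketch of the relevant /etc/fstab entries inside the guest. The device name, mount point and tmpfs size are placeholders for illustration, not recommendations - adjust them to your own layout:

    # Keep /tmp in RAM so the compiler's scratch files never touch the vdisk
    tmpfs        /tmp          tmpfs   defaults,size=2G   0  0
    # Volume holding the source tree: noatime cuts needless metadata writes
    /dev/sdb1    /home/build   ext4    noatime            0  2

If you would rather not edit fstab, you can get much the same effect for the current session and point the toolchain at it explicitly (GCC honours TMPDIR for its intermediate files):

    # Overmount /tmp with a 2 GB tmpfs right now (the old contents are hidden, not deleted)
    sudo mount -t tmpfs -o size=2G tmpfs /tmp
    # Send the toolchain's scratch files there
    export TMPDIR=/tmp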
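
On the fixed-size disk and write-buffering points, the exact commands depend on the hypervisor you end up with - the question doesn't name one, so take VirtualBox purely as an example. Roughly, creating a preallocated vdisk and turning on host I/O caching for an existing storage controller looks like this (the VM name, controller name, path and size are all placeholders):

    REM Create a 40 GB fixed-size (preallocated) VDI instead of a dynamically growing one
    VBoxManage createhd --filename D:\VMs\build.vdi --size 40960 --variant Fixed
    REM Let the host buffer writes for the controller the vdisk is attached to
    VBoxManage storagectl "BuildVM" --name "SATA Controller" --hostiocache on

Other products have equivalent settings; in VMware Workstation, for instance, you tick "Allocate all disk space now" when creating the disk.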

The I/O hit is going to dwarf any CPU hit, though there is some performance drop there too. Unlike I/O, though, there is very little you can do about it. With a modern CPU and a virtualisation product that can take advantage of the CPU's explicit virtualisation support (VT-x in your case), the difference for completely CPU-bound operations should be no more than a couple of percent. Try to avoid virtual SMP - it can actually be slower than giving the VM a single vCPU because of the way physical CPU time gets scheduled. In VMware at least, a guest with 2 vCPUs must wait until two cores are free in order to get a time-slice from the host scheduler; if the host is under load on top of the VM, this can make quite a difference. If the task(s) are mainly CPU-bound and can be split appropriately, you will often be better off running two or more VMs (assuming you have the RAM to support this) instead of giving one VM multiple vCPUs.
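
As a small aside on matching the build to however many vCPUs you do give the guest (assuming GNU make and coreutils), let the guest report its own processor count instead of hard-coding the host's core count:

    # Inside the guest: parallelise the build across the vCPUs the VM actually has
    make -j"$(nproc)"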

The use pattern you state (constant compile cycles) will be impacted by these performance hits, but I expect you will lose a lot less time this way than you would with the inconvenience of dual-booting. If your Windows use is much less CPU and I/O intensive, i.e. just using office apps and not heavy DB work or gaming or the like, you might want to consider running Linux as the host and Windows in the VM.