Best Practice: vCPUs per physical core

A single physical CPU core can back many vCPUs. You rarely run out of CPU resources in virtualization deployments; RAM and storage are almost always the limiting factors...

Remember, in VMware, CPU utilization is represented in MHz used, not cores... Unless you're pegging all of your virtual CPUs at 100% ALL OF THE TIME, I don't think your vendor is correct.

Let's look at the following cluster of systems...

  • 9 ESXi hosts.
  • 160 virtual machines.
  • 104 physical CPU cores across the cluster.
  • The average virtual machine profile: 4 vCPUs and 4 GB to 18 GB RAM.
  • CPU can safely be oversubscribed (see the quick math after this list)... but remember, it can also be limited, reserved, and prioritized at the VM level.
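
To put those numbers in perspective, here's the quick math (a sketch; it assumes every VM matches the stated 4-vCPU average profile):

```python
# Back-of-the-envelope vCPU:pCPU ratio for the cluster described above.
vms = 160
avg_vcpus_per_vm = 4      # the stated average VM profile
physical_cores = 104      # across all 9 hosts

total_vcpus = vms * avg_vcpus_per_vm
ratio = total_vcpus / physical_cores

print(f"{total_vcpus} vCPUs on {physical_cores} cores "
      f"~ {ratio:.1f}:1 oversubscription")
# 640 vCPUs on 104 cores ~ 6.2:1 oversubscription
```

Roughly a 6:1 vCPU-to-core ratio, and the cluster runs fine. That's the point: allocated vCPUs are not the same thing as concurrent demand.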

(Performance screenshots from another active cluster: 3 hosts, 42 virtual machines.)


To expand on ewwhite's write-up: unless you have applications that can explicitly take advantage of multiple vCPUs (or multiple cores per vCPU), there is zero benefit in allocating multiple vCPUs/cores to a VM. In fact, more often than not you will actually end up with lower performance than with a single vCPU with one core assigned to it, in part because of the scheduling overhead required to run multiple vCPUs.

FWIW, in a VDI setting the often-cited number is 5 vCPUs per physical core. Of course, that assumes typical office-worker desktops. If your VMs are busy compiling code all the time, you may not be able to fit 5 vCPUs per physical core.

The reason why so many people say "it depends" is because it really does. Look at your CPU Ready values and then decide whether you can put more CPU load on a particular host. CPU Ready measures the time a vCPU was ready to execute but had to wait for physical CPU time to become available.
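
If you pull CPU Ready from vCenter's performance charts, it comes back as a summation value in milliseconds, not a percentage. A minimal conversion sketch (it assumes the real-time chart's 20-second sampling interval; other chart views use longer intervals):

```python
def cpu_ready_percent(summation_ms: float,
                      interval_ms: float = 20_000,
                      vcpus: int = 1) -> float:
    """Average ready time per vCPU as a percentage of the sample interval.

    summation_ms: the CPU Ready summation value reported by vCenter.
    interval_ms:  20,000 ms for the real-time chart; longer for
                  daily/weekly rollups.
    """
    return summation_ms / (interval_ms * vcpus) * 100

# Example: 1,600 ms of ready time in one 20 s sample on a 4-vCPU VM
print(f"{cpu_ready_percent(1600, vcpus=4):.1f}% ready per vCPU")  # 2.0%
```

A commonly cited comfort threshold is staying under roughly 5% ready time per vCPU; sustained values above that suggest the host is struggling to find physical CPU time for its vCPUs.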

In your case, if you are compiling large programs, it's entirely possible that your VMs will actually need a lot of CPU time. But as ewwhite noted, virtualized workloads normally tend to be constrained by disk I/O and RAM rather than CPU.


The underlying problem is basically the same as with process scheduling on a physical system. As long as the system load is below the number of cores (or even logical processors, in the case of Hyper-Threading), all is well and the processors can handle the load.

So as long as the concurrent load across all in-use vCPUs does not exceed what your physical cores can handle, all is well; a rough sanity check is sketched below.
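
Here is what that check might look like (the per-VM utilization figures are hypothetical placeholders; substitute data from your own monitoring):

```python
# Estimate concurrent core demand: sum of (vCPUs x average busy fraction).
vms = [
    (4, 0.10),  # mostly idle app server (hypothetical)
    (4, 0.80),  # compile VM in the middle of a build (hypothetical)
    (2, 0.25),  # light utility VM (hypothetical)
]
physical_cores = 8

demand = sum(vcpus * busy for vcpus, busy in vms)
print(f"Estimated concurrent demand: {demand:.1f} of "
      f"{physical_cores} physical cores")
# Estimated concurrent demand: 4.1 of 8 physical cores
```

As long as the estimated demand stays below the core count, with headroom for VMs that spike at the same time, the scheduler has room to work.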

For your workload, only compiling is CPU-intensive, and it is only needed from time to time. For our compiler VMs we allocate as many vCPUs as are available, so when a compile is needed it finishes as fast as possible (provided your build supports parallel compilation; a minimal sketch follows).
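
For example, a make-based build can be told to spawn one job per vCPU the guest sees (a minimal sketch; it assumes a make-based project in the current directory):

```python
import os
import subprocess

# Run the build with one job per vCPU visible inside the guest.
jobs = os.cpu_count() or 1
subprocess.run(["make", f"-j{jobs}"], check=True)
```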

This might not hold for a compiler VM under constant load (e.g. if you run an Internet-facing build service that is in constant use).


One rule of thumb I've seen (possibly in VMware's documentation) is not to give a single VM more vCPUs than there are physical cores on the host, because that would force multiple vCPUs to be scheduled onto the same core, adding unnecessary overhead.