How much contention is too much in VMware?

Solution 1:

I can describe some of the experiences I've had in this area...

I don't believe that VMware does an adequate job of educating customers (or administrators) about best practices, nor does it update former best practices as its products evolve. This question is an example of how a core concept like vCPU allocation isn't fully understood. The best approach is to start small, with a single vCPU, and only add more once you've determined that the VM actually needs them.

For the OP, the ESXi host server has two quad-core CPUs, yielding 8 physical cores.

The virtual machine layout being described is 15 guests in total: 1 x 8 vCPU and 14 x 4 vCPU systems. That's 64 vCPUs on 8 physical cores, which is far too overcommitted, especially with a single 8 vCPU guest that spans the entire host. If you need a VM that big, you likely need a bigger server.
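To put numbers on that, here's the back-of-the-envelope math as a few lines of plain Python (nothing VMware-specific, just the counts from the question):

```python
# vCPU overcommit for the layout described in the question.
physical_cores = 2 * 4                     # two quad-core CPUs = 8 physical cores

guests = [(1, 8), (14, 4)]                 # (number of guests, vCPUs per guest)
total_vcpus = sum(n * vcpus for n, vcpus in guests)

print(f"Total vCPUs allocated: {total_vcpus}")                         # 64
print(f"Physical cores:        {physical_cores}")                      # 8
print(f"Overcommit ratio:      {total_vcpus / physical_cores:.0f}:1")  # 8:1
```

An 8:1 vCPU-to-core ratio, with one guest as wide as the whole host, is exactly the kind of layout where CPU contention shows up.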

Please try to right-size your virtual machines. I'm fairly certain most of them can live with 2 vCPUs. Adding virtual CPUs does not make things run faster, so if extra vCPUs are being used as a remedy for a performance problem, it's the wrong approach.

In most environments, RAM is the most constrained resource, but CPU becomes a problem when there's too much contention, and you have evidence of that here. RAM can also be an issue if too much is allocated to individual VMs.

It's possible to monitor this. The metric you're looking for is CPU Ready, which measures how long a VM was ready to run but had to wait for physical CPU time. You can see it in the vSphere client by selecting a VM and going to Performance > Overview > CPU Graph (there's also a quick way to work it out from the raw counter, sketched below the graph).

  • Under 5% CPU Ready - You're fine.
  • 5-10% CPU Ready - Keep a close eye on activity.
  • Over 10% CPU Ready - Not good.

Note the yellow line in the graph below.

[screenshot: vSphere CPU performance chart, with the CPU Ready line highlighted in yellow]
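If you'd rather check this from the raw counter than eyeball the chart, the statistic behind that line is cpu.ready.summation, reported in milliseconds per sample. Here's a minimal sketch of the conversion to a percentage in plain Python (no vSphere API calls); it assumes the real-time chart's 20-second sample interval, and the per-vCPU division is my own addition for multi-vCPU guests:

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0, vcpus: int = 1) -> float:
    """Convert a cpu.ready.summation sample (milliseconds) into CPU Ready %.

    ready_ms   -- ready time reported for the sample interval, in ms
    interval_s -- sample length in seconds (real-time charts sample every 20 s)
    vcpus      -- divide by the guest's vCPU count for a per-vCPU figure
    """
    return ready_ms / (interval_s * 1000.0 * vcpus) * 100.0


def verdict(pct: float) -> str:
    """Apply the rule-of-thumb thresholds listed above."""
    if pct < 5:
        return "you're fine"
    if pct <= 10:
        return "keep a close eye on it"
    return "not good"


# Example: a 4 vCPU guest reporting 9,600 ms of ready time in one 20 s sample.
pct = cpu_ready_percent(9600, vcpus=4)
print(f"{pct:.1f}% CPU Ready -> {verdict(pct)}")   # 12.0% CPU Ready -> not good
```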

Would you mind checking this on your problem virtual machines and reporting back?

Solution 2:

You state in the comments that you have a dual quad-core ESXi host, and that you're running one 8vCPU VM and fourteen 4vCPU VMs.

If this were my environment, I would consider that grossly over-provisioned. I would put at most four to six 4vCPU guests on that hardware, and that's assuming the VMs in question actually have load that justifies a vCPU count that high.

I'm assuming you don't know the golden rule... with VMware you should never assign a VM more cores than it needs. Reason? VMware uses somewhat strict co-scheduling, which makes it hard for a VM to get CPU time unless there are as many physical cores free as the VM has vCPUs. In other words, a 4vCPU VM can't perform one unit of work unless 4 physical cores are open at the same moment, so it's architecturally better to have a 1vCPU VM at 90% CPU load than a 2vCPU VM at 45% load per core.
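To make that concrete, here's a toy model, and I want to stress it's my own simplification rather than how the ESXi scheduler literally works: if each physical core is independently free with probability p at a given instant, a guest that needs all of its vCPUs placed together only gets to run when that many cores are free at once:

```python
# Toy co-scheduling model: assume each physical core is independently free
# with probability p at any scheduling instant. A guest that must have all
# n of its vCPUs placed at the same moment runs with probability p ** n.
# (Deliberate simplification; it just illustrates why wide VMs wait more.)

p = 0.5  # chance that any single physical core is free on a busy host

for n in (1, 2, 4, 8):
    print(f"{n:>2} vCPU guest: gets scheduled ~{p ** n * 100:.1f}% of the time")

# With p = 0.5 that's roughly 50%, 25%, 6% and 0.4% respectively -- which is
# the whole argument for keeping vCPU counts as low as possible.
```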

So... ALWAYS create VMs with the minimum number of vCPUs, and only add more when it's determined to be necessary.

For your situation, use Veeam to monitor CPU usage on your guests, and reduce the vCPU count on as many as possible. I'd be willing to bet you could drop to 2vCPU on almost all of your existing 4vCPU guests.

Granted, if all these VMs actually have the CPU load to require the vCPU count they have, then you simply need to buy additional hardware.