Running 100 virtual machines on a single VMware host server

I've been using VMware for many years, running dozens of production servers with very few issues. But I never tried hosting more than 20 VMs on a single physical host. Here is the idea:

  1. A stripped-down Windows XP install can live with 512MB of RAM and 4GB of disk space.
  2. $5,000 gets me an 8-core server-class machine with 64GB of RAM and four SAS mirrors.
  3. Since 100 of the VMs described above fit on this server, my hardware cost is only $50 per VM, which is super nice (cheaper than renting VMs from GoDaddy or any other hosting shop). Quick sanity-check math below.
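
Here is the back-of-envelope math, just restating my own assumptions in code:

    # Quick sanity check on the numbers above (all figures are my own assumptions).
    server_cost = 5000     # USD for the 8-core / 64GB / four-SAS-mirror box
    vms = 100
    ram_per_vm_gb = 0.5    # 512MB for a stripped-down XP
    disk_per_vm_gb = 4

    print(f"Cost per VM: ${server_cost / vms:.0f}")               # $50
    print(f"RAM needed:  {vms * ram_per_vm_gb:.0f} GB of 64 GB")  # 50 GB
    print(f"Disk needed: {vms * disk_per_vm_gb:.0f} GB")          # 400 GB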

Has anybody been able to achieve this kind of scalability with VMware? I've done a few tests and bumped into a weird issue: VM performance starts degrading dramatically once about 20 VMs are running. At the same time, the host server shows no resource bottlenecks (the disks are 99% idle, CPU utilization is under 15%, and there is plenty of free RAM).

I'd appreciate it if you could share your success stories about scaling VMware or any other virtualization technology!


Yes, you can. Even for some Windows 2003 workloads, as little as 384MiB suffices, so 512MiB is a pretty good estimate, if a little high. RAM should not be a problem, and neither should CPU.

100 VMs is a bit steep, but it is doable, especially if the VMs are not going to be very busy. We easily run 60 VMs (Windows 2003 and RHEL) on a single ESX host.

Assuming you are talking about VMware ESX, you should also know that it is able to overcommit memory. VMs hardly ever use their full allotted memory, so ESX can commit more RAM to VMs than is physically available and run more VMs than it 'officially' has RAM for.
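
As a rough illustration of what that buys you (the active-memory fraction below is purely an assumption for the sketch, not a VMware figure):

    # Back-of-envelope memory overcommit estimate.
    # The active fraction is an assumption for illustration, not a VMware spec.
    host_ram_gb = 64          # physical RAM in the host
    vm_allocation_mb = 512    # RAM assigned to each XP VM
    active_fraction = 0.65    # assumed share of allocated RAM a VM actually touches

    committed_gb = 100 * vm_allocation_mb / 1024       # nominally committed: 50 GB
    estimated_use_gb = committed_gb * active_fraction  # what the VMs may really touch
    # Transparent page sharing between identical XP images can push this lower still.

    print(f"Committed: {committed_gb:.0f} GB, likely in use: {estimated_use_gb:.0f} GB "
          f"of {host_ram_gb} GB physical")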

Most likely your bottleneck will not be CPU or RAM, but IO. VMware boasts huge amounts of IOPS in their marketing, but when push comes to shove, SCSI reservation conflicts and limited bandwidth will stop you dead way before you come close to the IOPS VMware brags about.
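
To put numbers on that, a rough sketch (the per-disk figure is a typical assumption for a 15k RPM SAS drive, not a measurement):

    # Rough IOPS budget for four SAS mirrors (8 spindles total).
    disks = 8                  # four mirrored pairs
    iops_per_disk = 180        # assumed typical for a 15k RPM SAS drive
    vms = 100

    read_iops = disks * iops_per_disk          # reads can be served by either side of a mirror
    write_iops = (disks // 2) * iops_per_disk  # each write lands on both disks of a pair

    print(f"~{read_iops} read / ~{write_iops} write IOPS for the whole box")
    print(f"Roughly {read_iops / vms:.0f} read / {write_iops / vms:.0f} write IOPS per VM")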

Anyway, we are not experiencing the 20-VM performance degradation. What version of ESX are you using?
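
One thing worth checking when the host looks idle but the VMs crawl is CPU ready time, which does not show up as plain CPU utilization. A rough sketch for digging it out of esxtop's batch output (capture with esxtop -b -n 60 > stats.csv first; the '% Ready' column naming is assumed from esxtop's perfmon-style batch format and may differ by version):

    # Scan an esxtop batch-mode CSV for VMs with high CPU ready time.
    # Column naming ("% Ready") is assumed from esxtop's perfmon-style
    # batch format; adjust the match string for your ESX version.
    import csv

    with open("stats.csv", newline="") as f:
        rows = list(csv.reader(f))

    header, samples = rows[0], rows[1:]
    ready_cols = [i for i, name in enumerate(header) if "% Ready" in name]

    for i in ready_cols:
        values = [float(row[i]) for row in samples if len(row) > i and row[i]]
        if values and max(values) > 10.0:  # sustained ready time ~10%+ suggests CPU contention
            print(f"{header[i]}: peak ready {max(values):.1f}%")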


One major concern with an environment that large is disaster recovery and data protection. If the server dies, then 100 VMs die with it.

You need to plan for some sort of failover of the VMs, and for some sort of "extra-VM" management that will protect your VMs in case of failure. Of course, this sort of redundancy means increased cost, which is probably why such an outlay often isn't approved until its benefits have been demonstrated the hard way, by a failure.

Remember, too, that the VM host is only one of several single points of failure:

  • Network - what if the VM host's networking card goes down?
  • Memory - what if a chunk of the VM host's memory goes bad?
  • CPU - if a CPU core dies, then what happens to the VMs?
  • Power - is there only one - or two - power cables?
  • Management port - suppose you can't get to the VM host's management interface?

These are just a few; a massive VM infrastructure requires careful attention to preventing both data loss and VM loss.


No statement on the viability of this in production, but there is a very interesting NetApp demo where they provision 5,440 XP desktops on 32 ESX hosts (that's 170 per host) in about 30 minutes, using very little disk space thanks to deduplication against the common VM images:

http://www.youtube.com/watch?v=ekoiJX8ye38
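
As a toy illustration of why dedup works so well for cloned VM images (purely conceptual, not how NetApp actually implements it): identical blocks across images are stored once and merely referenced by each clone.

    # Toy block-level dedup: 100 clones of one base image, each differing
    # in a single block. Conceptual only; real dedup is far more involved.
    import hashlib

    BLOCK = 4096

    def blocks(image: bytes):
        return [image[i:i + BLOCK] for i in range(0, len(image), BLOCK)]

    golden = b"\x00" * (50 * BLOCK)  # stand-in for the common base image
    clones = [golden[:-BLOCK] + bytes([n]) * BLOCK for n in range(100)]

    unique = {hashlib.sha1(b).digest() for img in clones for b in blocks(img)}
    total = sum(len(blocks(img)) for img in clones)
    print(f"{total} logical blocks stored as {len(unique)} unique blocks")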

My guess is that your limitations are coming from the disk subsystem. You seem to have accounted for memory and CPU usage properly.


Never done it, but I promise you'll spend much more on storage to get enough IOPS to support that many VMs than you will on the server hardware. You'll need a lot of IOPS if all 100 of them are active at the same time. Not to sound negative, but have you also considered that you're putting a lot of eggs in one basket? (It sounds like you're after a single-server solution.)