When deploying a single server on new hardware, do you virtualize it or not?

There are a few questions I've found on ServerFault that hint around this topic, and while it may be somewhat opinion-based, I think it can fall into that "good subjective" category based on the criteria below:

Constructive subjective questions:

* tend to have long, not short, answers
* have a constructive, fair, and impartial tone
* invite sharing experiences over opinions
* insist that opinion be backed up with facts and references
* are more than just mindless social fun

So, with that out of the way:


I'm helping out a fellow sysadmin who is replacing an older physical server running Windows Server 2003, and he's looking not only to replace the hardware but also to "upgrade" to 2012 R2 in the process.

In our discussions about his replacement hardware, we discussed the possibility of installing ESXi and making the 2012 "server" a VM, then migrating the old apps/files/roles from the 2003 server to that VM instead of to a bare-metal install on the new hardware.

He doesn't foresee any need in the next few years to move anything else to a VM or to create additional VMs, so in the end this will be either new hardware running a bare-metal install or new hardware running a single VM on ESXi.

My own experience still leans towards a VM, even though there isn't a truly compelling reason to do so other than the possibility of adding more VMs later. There is now the additional overhead and management burden of the hypervisor, although in my experience a VM also brings better management and reporting capabilities.

So, with the premise of hoping this can stay in the "good subjective" category to help others in the future, what experiences/facts/references/constructive answers do you have to support either outcome (virtualizing a single "server" or not)?


Solution 1:

In the general case, the advantage of putting a standalone server on a hypervisor is future-proofing. It makes future expansion or upgrades much easier, much faster, and as a result, cheaper. The primary drawback is additional complexity and cost (not necessarily financially, but from a man-hours and time perspective).

So, to come to a decision, I ask myself three questions (and usually prefer to put the server on a hypervisor, for what it's worth).

  1. How big is the added cost of the hypervisor?
    • Financially, it's usually minimal or non-existent.
      • Both VMware and Microsoft have licensing options that allow you to run a host and a single guest for free, and this is sufficient for most standalone servers, exceptions generally being servers that are especially resource-intensive.
    • From a management and resource standpoint, determining cost can be a bit trickier.
      • You basically double the cost of maintaining the system, because now you have two systems to monitor, manage and keep up-to-date with patches and updates (the guest OS and the host OS).
        • For most uses, this is not a big deal, as it's not terribly taxing to maintain one server, though for some especially small or especially technically challenged organizations, this can be a real concern.
      • You also add to the technical skills required. Now, instead of just needing someone who can download updates from Windows Update, you need someone who knows enough to manage and maintain the virtualization environment.
        • Again, not usually a problem, but sometimes, it's more than an organization can handle.

  2. How big is the benefit from ease of upgrade or expansion?
    • This boils down to how likely future expansion is, because obviously, if they don't expand or upgrade their server assets, this benefit is zero.
      • If this is the type of organization that's just going to stuff the server in a corner and forget about it for 10 years until it needs to be replaced anyway, there's no point.
      • If they're likely to grow organizationally, or even just technically (by, say, adding new servers with different roles instead of just having an all-in-one server), then this provides a fairly substantial benefit.

  3. What's the benefit now?
    • Virtualization brings benefits beyond future-proofing, and in some use cases they can be substantial.
      • The most obvious one is the ability to create snapshots and trivial-to-restore backups before doing something on the system, so if it goes bad you can revert in one click (a short scripted sketch follows this list).
      • The ability to experiment with other VMs (and play the "what if" game) is another one I've seen management get excited about.
      • For my money, though, the biggest benefit is the added portability you get from running a production server on a hypervisor. If something goes really wrong and you get yourself into a disaster-recovery or restore-from-backups situation, it is almost infinitely easier to restore a disk image to a machine running the same hypervisor than to attempt a bare-metal restore.
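
As a small illustration of the snapshot point above, here is a rough sketch using VMware's pyVmomi Python SDK. The host name, credentials and VM name are placeholders, and it assumes the usual self-signed certificate on a standalone ESXi host:

```python
# Minimal sketch: take a quiesced "pre-change" snapshot of one guest on a
# standalone ESXi host via pyVmomi. Host, credentials and VM name are
# placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ESXI_HOST = "esxi01.example.local"    # hypothetical standalone host
USER, PASSWORD = "root", "changeme"
VM_NAME = "win2012r2"                 # hypothetical guest name

# Standalone hosts typically run with a self-signed certificate.
context = ssl._create_unverified_context()
si = SmartConnect(host=ESXI_HOST, user=USER, pwd=PASSWORD, sslContext=context)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)

    # No memory dump, quiesce the guest filesystem; revert later with
    # vm.RevertToCurrentSnapshot_Task() or a click in the host client.
    task = vm.CreateSnapshot_Task(name="pre-update",
                                  description="Before Windows updates",
                                  memory=False, quiesce=True)
    print("Snapshot task started:", task.info.key)
finally:
    Disconnect(si)
```

The same operation is a couple of clicks in the client; the point is simply that, once the server is a VM, backing out of a bad change is a single step rather than a restore from backup.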

Solution 2:

I think the operating system being virtualized is a big factor, along with performance requirements and potential for expansion/growth. Today's servers are often excessively powerful for the applications and operating systems we use. In my experience, most standard Windows systems can't make efficient use of the resources available in a modern dual-socket server. With Linux, I've leveraged some of the granular resource management tools (cgroups) and containers (LXC) to make better use of physical systems. But the market is definitely geared toward virtualization-optimized hardware.
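
To illustrate what I mean by granular resource management, here is a rough sketch of capping CPU and memory for a group of processes with cgroup v2. The group name and limits are made-up examples, and it assumes root plus a unified cgroup v2 hierarchy mounted at /sys/fs/cgroup:

```python
# Rough sketch: cap CPU and memory for a process group with cgroup v2,
# instead of carving the box up with a hypervisor. Assumes root and a
# unified cgroup v2 hierarchy at /sys/fs/cgroup; the group name and
# limits below are arbitrary examples.
import os
from pathlib import Path

ROOT = Path("/sys/fs/cgroup")
GROUP = ROOT / "app-sandbox"          # hypothetical group name

# Delegate the cpu and memory controllers to child groups
# (a no-op if they are already enabled).
(ROOT / "cgroup.subtree_control").write_text("+cpu +memory\n")
GROUP.mkdir(exist_ok=True)

# At most two CPUs' worth of time: 200ms of runtime per 100ms period.
(GROUP / "cpu.max").write_text("200000 100000\n")
# Hard memory ceiling of 4 GiB for everything in the group.
(GROUP / "memory.max").write_text(str(4 * 1024**3) + "\n")

# Move this process (and any children it spawns later) into the group.
(GROUP / "cgroup.procs").write_text(str(os.getpid()) + "\n")
```

LXC containers and systemd slices sit on top of the same mechanism, so on Linux you can get much of the consolidation benefit without a hypervisor layer.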

That said, I've virtualized single systems rather than doing bare-metal installs in a few situations. Common reasons are:

  • Licensing - The dwindling number of applications that are licensed based on rigid core, socket, or memory limits (without regard to the trends in modern computing). See: Disable CPU cores in bios?

  • Portability - Virtualizing a server abstracts the VM from the hardware. This makes platform changes less disruptive and allows the VM to reference standard virtualized devices/components. I've been able to keep decrepit (but critical) Windows 2000 systems on life support using this approach.

  • Future expansion - I have a client now who has a Windows 2003 domain controller running on 2001-era hardware. I'm building a new single-host ESXi system for them, which will house a new 2012 R2 domain controller for the interim. But more VMs will follow. In this configuration, I can offer reliable resource expansion without additional hardware costs.

The downside of doing this with a single host and a single VM is management. I'm coming from the VMware perspective, but in the past ESXi was a bit friendlier to this arrangement. Today, the requirement to use the vSphere Web Client, along with restricted access to basic features, makes running a single-host (and single-VM) solution less attractive.

Other considerations are crippled hardware monitoring and added complexity around common external peripherals (USB devices, tape drives, backup and UPS solutions). Today's hypervisors really want to be part of a larger management suite.