Best Practices for virtualizing servers onto the SAN?

Your plan is not nuts. As usual, there are more than a few ways to attack this, depending on what you're trying to achieve and how you want to protect your data.

First up, you can present a raw LUN to a VM using a "Raw Device Mapping". To do this:

  • Present the LUN to the ESXi host (or host group, if you are going to use clustering/HA)
  • Add a disk to your VM, select Raw Device Mapping, point at the LUN
  • Rescan the SCSI bus inside the VM
  • fdisk, mount, and add to fstab, just like a normal disk (see the sketch below)
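For the in-guest part, a minimal sketch assuming a Linux guest; the SCSI host number, device name /dev/sdb and mount point are assumptions and will vary:

    # Rescan the SCSI bus inside the guest so the new RDM disk shows up
    echo "- - -" > /sys/class/scsi_host/host0/scan

    # Partition, format, and mount it like any other disk
    fdisk /dev/sdb                  # create a partition interactively
    mkfs.ext4 /dev/sdb1             # or your filesystem of choice
    mkdir -p /data
    mount /dev/sdb1 /data

    # Make the mount persistent across reboots
    echo "/dev/sdb1  /data  ext4  defaults  0 2" >> /etc/fstab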

Upside: fast to set up, fast to use, easy, and you can re-present the disk to a physical host if you find yourself needing to V2P down the track.

Downside: you may lose some VMware-based snapshot/rollback options, depending on whether you use physical or virtual compatibility mode.

An alternative option is to format the LUN with VMFS to create a datastore, then add a VMDK disk to the VM that lives on that datastore.

  • Upside: it's Storage vMotion-friendly if you ever buy a license to use it. This allows for hot migration of VMDK disks between LUNs and even SANs.
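Creating the datastore is typically done from the vSphere client, but the VMDK itself can also be created from the ESXi shell. A rough sketch; the datastore path, size and disk format are assumptions:

    # Create a 200GB VMDK for the VM on the SAN-backed datastore
    vmkfstools -c 200G -d zeroedthick /vmfs/volumes/san-datastore1/fileserver/data.vmdk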

In both cases you're in a similar risk position should VMware or your VM eat the filesystem during a failure; one is not drastically better than the other, although the recovery options available will be quite different.

I don't deploy RDMs unless I have to; I've found they don't buy me much flexibility over a VMDK, and I've been bitten by bugs that made them impractical when performing other storage operations (since fixed - see the RDM section in that link).


As for your VM, your best bet for flexibility is to store your fileserver's boot disk as a VMDK on the SAN, so that other hosts can boot it in the case of a host failure. With VMware's HA functionality, booting your VM on another host is automatic (the VM will boot on the second host as if the power had been pulled, so expect to perform the usual fscks and magic to bring it up, just as with a normal server). Note that HA is a licensed feature.

To mitigate against a VM failure, you can build a lightweight clone of your fileserver containing the bare minimum required to boot and start Samba in a configured state, and store it on each host's local disk, waiting for you to attach the data drive from the failed VM and power it on.
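The manual failover could look something like this from the ESXi shell, assuming the data disk is a VMDK on the SAN datastore; the VM names, paths and VM ID are all assumptions:

    # Point the standby VM's spare SCSI slot at the failed VM's data disk
    cat >> /vmfs/volumes/local-datastore/fileserver-standby/fileserver-standby.vmx <<'EOF'
    scsi0:1.present = "TRUE"
    scsi0:1.fileName = "/vmfs/volumes/san-datastore1/fileserver/data.vmdk"
    EOF

    # Find the standby VM's ID, reload its config, and power it on
    vim-cmd vmsvc/getallvms
    vim-cmd vmsvc/reload 42        # 42 = VM ID from the previous command
    vim-cmd vmsvc/power.on 42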

This may or may not buy you extra options in the case of a SAN failure; best case scenario, your data storage will require a fsck or other repair, but at least you don't have to fix, rebuild or configure the VM on top. Worst case, you've lost the data and need to go back to tape... but you were already in that state anyway.


I'd stick with the VMDK images, just in case you move to using vMotion in the future; you never know, you may get the budget for it.

If your machines aren't clustered, then as far as I'm concerned the best way to manage them is to try and spread the load as evenly as you can. I have three non-clustered 2950s where the load from the most critical VMs is, as much as possible, a third on each. The theory being that I'm unlikely to lose more than one box at once, so at least two-thirds will be able to continue operating unaffected.

From a power point of view it would probably be more efficient to load up the machines as near to 100% as you can and have the other machines powered off, but that seems like putting all your eggs in one basket to me.

I wouldn't call myself an expert at this; it's just what I do.


Hey Matt. There are lots of ways to slice up a solution when you use virtualization. First off, there have been lots of benchmarks comparing raw LUN (RDM) versus VMDK performance, and the difference is typically shown to be negligible. Some things to be aware of with RDMs:

  • Only certain clustering situations actually require RDMs (MS clustering, for example).
  • RDMs have a 2TB limit, but LVM can be used to work around it (see the sketch below).
  • RDMs are harder to keep track of than handing a LUN to ESXi for VMFS and putting VMDKs on it.

VMDKs (as mentioned) have some nice benefits: Storage vMotion and snapshots (you can't snapshot a physical-mode RDM).
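For the LVM workaround, a minimal sketch from inside the guest, assuming two sub-2TB RDMs show up as /dev/sdb and /dev/sdc (device and volume names are assumptions):

    # Concatenate the two RDMs into one large logical volume
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_data /dev/sdb /dev/sdc
    lvcreate -l 100%FREE -n lv_data vg_data
    mkfs.ext4 /dev/vg_data/lv_data
    mount /dev/vg_data/lv_data /data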

If running free ESXi, here is how I might go about your situation. First off, all data is in VMDK files on VMFS LUNs. Set up two VMs and use Heartbeat for failover of the IP and services. Heartbeat will shift the service IP over, and can handle scripting to unmount/mount the data LUN where appropriate. You could even script some VMware Remote CLI to ensure the 'down' VM gets powered off for fencing. With Heartbeat directly coordinating between the systems, the risk of both accessing the data LUN or running the same services should be extremely low. The key here is making sure that mounting/unmounting of the data LUN and startup/shutdown of the services are handled by Heartbeat, not the normal init mechanisms.
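A minimal Heartbeat (v1-style) sketch of that idea; the node names, service IP and device path are assumptions:

    # /etc/ha.d/haresources -- on failover, the surviving node takes the IP,
    # mounts the data disk, then starts Samba, in that order
    cat > /etc/ha.d/haresources <<'EOF'
    fs-a IPaddr::192.168.1.50/24 Filesystem::/dev/sdb1::/data::ext4 smb
    EOF

    # Make sure Samba is NOT started by the normal init mechanism;
    # Heartbeat owns its startup/shutdown
    update-rc.d -f smb remove      # Debian-style; use chkconfig on RHEL-style systems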

An alternative failover might be accomplished via a monitoring system. When it detects the down host, it could use the VMware Remote CLI to issue a power-off (to be safe) and then a power-on of the backup VM. In this situation, failing back is fairly manual.
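Something like this from the monitoring box, assuming the VMware Remote CLI's vmware-cmd is installed; host names, credentials and .vmx paths are assumptions:

    # Fence the (possibly wedged) primary VM -- 'hard' is the power-pull equivalent
    vmware-cmd -H esxi01.example.com -U root -P 'secret' \
      /vmfs/volumes/san-datastore1/fileserver/fileserver.vmx stop hard

    # Power on the standby copy on the surviving host
    vmware-cmd -H esxi02.example.com -U root -P 'secret' \
      /vmfs/volumes/local-datastore/fileserver-standby/fileserver-standby.vmx start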

In my "tiny" environment I've not seen a VMDK get corrupted. What I've also come to realize is that if you have more than two ESX(i) hosts or a dozen VMs, you'll want vCenter to help keep track of everything. Some of the Essentials/Essentials Plus packages are not too costly considering the benefits.


Matt, you know I don't use VMware, but I have always used "raw" block devices with Xen. With just a few lightly loaded VMs I doubt you will see much of a performance difference. But when you start getting into more and more guests, if all those guests are on the same filesystem you will end up with queue-depth issues. This is especially true of NFS-backed storage. It's not so much that the NFS server has the issues, but most NFS client implementations suck.

I don't know of a good way to synchronize the VMDKs if you are looking for redundancy (SAN failure). But if you use block devices, you still have the option of using DRBD to replicate just the VMs you want/need replicated.
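A minimal DRBD sketch of that idea, assuming two guests "fs-a" and "fs-b" that each see the block device to replicate as /dev/sdb (names, IPs and device paths are assumptions):

    # /etc/drbd.d/data.res, identical on both nodes
    cat > /etc/drbd.d/data.res <<'EOF'
    resource data {
      protocol C;
      device    /dev/drbd0;
      disk      /dev/sdb;
      meta-disk internal;
      on fs-a { address 10.0.0.1:7789; }
      on fs-b { address 10.0.0.2:7789; }
    }
    EOF

    drbdadm create-md data            # initialise metadata (run on both nodes)
    drbdadm up data                   # bring the resource up (run on both nodes)
    drbdadm primary --force data      # on fs-a only: promote it and start the initial sync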