Are there any reasons I should not create a datastore to house a single VM?

Well, it works in their case: with a normal VM snapshot, you need to keep extra space free on the datastore to hold the delta between the snapshot and the base disk, whereas their tools have the storage array handle that instead. I'm not sure whether the array snapshots the whole LUN when it does this; if that is the case, it would explain the benefit of splitting the VMs onto different datastores.
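To put a rough number on that "extra space free" requirement, here's a back-of-the-envelope sketch. The change rate and safety factor are illustrative assumptions of mine, not VMware-documented figures:

```python
# Rough estimate of datastore free space to reserve for an open VM snapshot.
# The delta grows with writes to the base disk, so size it from the disk
# size, an assumed daily change rate, and a safety multiplier.

def snapshot_headroom_gb(vm_disk_gb, daily_change_rate=0.10, safety_factor=2.0):
    """Estimate free space (GB) to leave for one open snapshot:
    expected daily delta (disk size * change rate) times a safety factor."""
    return vm_disk_gb * daily_change_rate * safety_factor

# A 200 GB VM disk at an assumed 10% daily change rate, doubled for safety:
print(snapshot_headroom_gb(200))  # 40.0
```

On a one-VM datastore that headroom has to come out of that single LUN, which is part of why array-side snapshots sidestep the problem.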

There's also something to be said for the traditional argument for fewer VMs per datastore: SCSI locking. With too many VMs on one store, SCSI reservations taken during their IO and metadata operations will step on each other.

The downside, of course, is the pain of managing all of this. Add a new disk to a VM? Gotta Storage vMotion stuff around to make room. Expand a disk? Same thing. Migrate to new storage? You're provisioning a whole lot of new LUNs. It's certainly a whole lot simpler from a management perspective to use big bucket datastores.


People are currently running VM densities of 20-40 VMs per datastore without IO problems, to address one of your concerns. I play it ultra cautious, I feel, and keep ~10-15 with no IO latency/CPU ready problems. VMware is moving towards larger datastores for easier management, so one VM per datastore really works in direct opposition to where the virtual environment is going. With vSphere 5 you can have very large datastores, so snapshots filling a datastore shouldn't concern you because 1) you have more space available and more time before an out-of-space condition occurs, and 2) with SDRS enabled you can have the VM automatically migrate to the next best datastore based on IO/space projected 24 hours out.

Also, snapshots shouldn't be kept open for long periods of time and are not a valid backup/recovery option. How many times have you had a snapshot fill a datastore and have 'bad things' happen? If you don't use SDRS, you can chip away at this possible blowup by not using thin provisioning on a disk you plan to keep snapshots open on for an extended time.


If you plan to run more VMs than the number of LUNs per host shown in the Configuration Maximums doc for your version of VMware, you wouldn't want one LUN per VM. For vSphere 4.1 that limit is 256; for ESX 3.5 it is only 62 (as I learnt yesterday when I couldn't see some new LUNs I was trying to attach).
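As a quick sanity check on that constraint, here's a minimal sketch. The limits table just restates the numbers quoted above; verify them against the Configuration Maximums doc for your exact version before relying on them:

```python
# One-LUN-per-VM only scales while the planned VM count stays under the
# per-host LUN maximum for your ESX/vSphere version.
# Limits below are the figures quoted above; check the official
# Configuration Maximums doc for your build.

LUNS_PER_HOST = {
    "ESX 3.5": 62,
    "vSphere 4.1": 256,
}

def one_lun_per_vm_feasible(version, planned_vms):
    """True if a one-LUN-per-VM layout fits under the host's LUN limit."""
    return planned_vms <= LUNS_PER_HOST[version]

print(one_lun_per_vm_feasible("ESX 3.5", 100))      # False
print(one_lun_per_vm_feasible("vSphere 4.1", 100))  # True
```

In practice you'd also leave headroom below the limit for non-VM LUNs (RDMs, scratch, shared datastores) rather than planning right up to the maximum.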