Advantages/disadvantages of using a single volume / drive letter for Windows Server installs

Our virtual environment is cut up as a VM per job role (DNS, DC, System Center CM, File Server, Log Aggregation, IIS, etc). If disk performance isn't critical, is it really necessary to logically separate the Windows OS files from your application files/data/logs? Back on Windows 2000/2003 and NT4 (gasp!), our shop's standard was to split the workload across C: and D: drives, even when they were served by the same underlying disk array.

Assuming you don't need separate disk arrays for performance (as in SQL Server scenarios), is this separation still useful or necessary? What are the advantages and disadvantages of splitting your OS and data across different volumes these days?

One advantage I can think of in favor of multiple volumes: you can run chkdsk against a non-system drive interactively, without requiring a reboot (this has saved me some downtime in the past).
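For illustration (elevated prompt assumed):

    chkdsk D: /f    # locks and checks a data volume immediately, no reboot needed
                    # (assuming nothing holds files open on it)
    chkdsk C: /f    # can't lock the system volume; offers to schedule
                    # the check at the next restart instead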

Thanks for your help - we have a pending build-out of several new utility servers and don't want to over-engineer.


If you're just asking whether you should have a "system" partition and a separate "application" partition, I don't see the value in it, especially now.

It used to be the convention on UNIX-type systems, because if the system volume filled up with data you could end up with a machine that was unbootable or unusable; separating the partitions was a pseudo-physical way of preventing, say, unattended log files from filling your boot partition.

Today you shouldn't see much benefit from it if you're maintaining your systems properly. In fact, it can now cause problems: updates and in-place upgrades are huge, and what was once a comfortable 10 GB Windows install partition is now tiny. I've also seen issues with the way Windows stages files transferred over the network and temporary downloads; it fills the system partition, can't copy things to where they belong afterwards, and it made fragmentation dramatically worse.
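For what it's worth, keeping an eye on that is a one-liner on 2012 and later (a sketch using the Storage module):

    # List each volume's size and free space in GB
    Get-Volume | Select-Object DriveLetter, FileSystemLabel,
        @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB, 1) } },
        @{ n = 'FreeGB'; e = { [math]::Round($_.SizeRemaining / 1GB, 1) } }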

If you're virtualizing this system, you can't even argue you're getting better performance, since the virtual machine is abstracted away from the physical disk layer regardless of how many partitions/drives the VM thinks it has.

If your application is so time-sensitive that you can't take the system down for a disk check (which you should rarely ever need), you should have failover support and maintenance windows planned, and in the VM case you could probably keep the VM running while diagnosing the issue against a snapshot or copy in a sandbox. If the service is critical, you should already have plans in place to keep it available if the machine were to fail; that builds in the ability to fix the virtual box without interrupting service. Otherwise, your users will have to live with a period of downtime.

Also, Windows is finally gaining the kind of flexibility Linux has long had: on-the-fly volume management that can grow and shrink volumes and combine disks into larger pools (much like Linux's LVM). Windows servers are slowly moving away from the drive-letter-centric model and toward the volume-management model.
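For the curious, the pooled model looks roughly like this on 2012 and up (a sketch, not a recipe; the pool and volume names are placeholders):

    # Pool the poolable physical disks, then carve a volume out of the pool
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName 'DataPool' `
        -StorageSubSystemFriendlyName 'Windows Storage*' `
        -PhysicalDisks $disks
    New-Volume -StoragePoolFriendlyName 'DataPool' -FriendlyName 'Data' `
        -FileSystem NTFS -Size 500GB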

The services you mention already have some redundancy built in if you're using Windows DCs, so downtime for a chkdsk shouldn't be an issue for many of them.

Overall, unless you have a direct need to create separate volumes as drive letters, I'd create one large drive and leave it at that. It's simpler, it's more flexible down the road, and it's overall a smaller PITA to deal with.


I don't think it was ever a standard, although it was a widely implemented convention. I never saw the value in doing it and still don't. I understand wanting to separate your data from your OS for backup/recovery reasons, but separating your OS from your applications and data for performance by creating individual logical volumes on the same physical disk subsystem is a misguided endeavor. If anything, you're going to put more load on the disk I/O by doing that, as your OS and applications contend for the same underlying physical disk(s). If you need to separate disk-I/O-intensive applications from the OS (such as SQL or Exchange), you need to do it using separate physical disks or disk arrays.


Certain file system operations may be easier, as you're probably already aware (chkdsk). The same can go for backups, depending on how your backup system works.
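As an aside, on 2012 and later the chkdsk story is friendlier still; a sketch with the Storage module:

    # Online scan of a data volume; logs problems without taking it offline
    Repair-Volume -DriveLetter D -Scan
    # Then fix whatever the scan flagged (brief dismount, but no reboot)
    Repair-Volume -DriveLetter D -SpotFix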

Also, planning may be simpler. If your OS's disk requirements are constant but your applications' requirements vary, you may have more flexibility when deploying VMs if you simply attach an application disk to a templated base VM.
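Something like this with the Hyper-V module, for example (the VM name and path are placeholders):

    # Create a dynamic data disk and attach it to a VM cloned from the template
    New-VHD -Path 'D:\VHDs\util01-data.vhdx' -SizeBytes 100GB -Dynamic
    Add-VMHardDiskDrive -VMName 'UTIL01' -Path 'D:\VHDs\util01-data.vhdx'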

I'm sure there are other advantages and disadvantages, but their relevance will depend on how your organization operates.


This practice stems from the venerable days when, on UNIX (System V and prior), you could run out of inodes on the boot volume and render a system unbootable. On modern UNIXes and on Windows this is no longer the case; Windows admins simply mimicked the partitioning scheme. Windows admins I know also used to hold the misguided belief (based on the old practice, I think) that separating data from the OS would somehow insulate the data from corruption should the boot volume go bad. In any case, the only real reasons to separate the data are portability and expandability: it's much easier to expand a separate partition than it is to expand the boot volume.
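On the expandability point, growing a data partition is a two-liner these days (assuming contiguous free space sits behind it):

    # Grow D: into all the free space available to it
    $max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
    Resize-Partition -DriveLetter D -Size $max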


While there is a performance gain from running the OS and data on separate drives, there is a surprisingly common misconception that the same applies to separate partitions for the OS and data (or applications, or whatever). This couldn't be more wrong.

Multiple partitions on a single physical drive or array actually cause a performance drop, and the reason is quite simple: the biggest performance hit comes from the physical movement of the head assembly. The drive can't even begin to read or write until the heads have arrived at the correct location and stopped moving laterally. With multiple partitions the heads need to move around much more than they would on a single partition, and the average distance traveled is much greater.
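To put a rough number on that, here's a toy simulation (an illustration under simplifying assumptions, uniformly random I/O and a head position normalized to the full stroke, not a benchmark):

    $rand = [System.Random]::new()
    $n = 100000
    $pos = 0.5
    $single = 0.0
    for ($i = 0; $i -lt $n; $i++) {
        $next = $rand.NextDouble()              # request anywhere on the disk
        $single += [Math]::Abs($next - $pos)
        $pos = $next
    }
    $pos = 0.5
    $split = 0.0
    for ($i = 0; $i -lt $n; $i++) {
        # alternate between an OS partition (inner half) and a data partition (outer half)
        $next = if ($i % 2) { 0.5 * $rand.NextDouble() } else { 0.5 + 0.5 * $rand.NextDouble() }
        $split += [Math]::Abs($next - $pos)
        $pos = $next
    }
    "single partition: {0:N3} of full stroke per seek" -f ($single / $n)
    "split partitions: {0:N3} of full stroke per seek" -f ($split / $n)

Under those assumptions, alternating between the two partitions averages about half the full stroke per seek, versus roughly a third for a single partition.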

Back when drives were quite small it actually made sense to separate the OS and data, if only to ensure there was enough space for each. With modern drive capacities this makes less sense unless there is a more compelling reason than just separating them for the sake of doing so. Of course there are plenty of cases where the separation does make sense, provided you use physically separate drives or arrays; e.g. a high-demand database server may well require dedicated drives in order to cope.