Why have multiple partitions on a Windows server?
At my IT outfit, we have templates that deploy servers with a dinky C: drive/partition (10GB) and a larger D: drive/partition. Why do this? Windows (at least until recently, and even then only minimally) makes no real use of dynamic mount points in general server deployments.
Edit
A synopsis of the comments below:
1. It's faster to recover a smaller partition. This includes NTFS corruption, which would be confined to a particular partition instead of messing up the entire system.
2. You get some protection from runaway processes. This includes the ability to set quotas.
3. Provides some cost savings for RAID configuration.
4. A religious holdover from the days before virtualization, RAID, and high-bandwidth networks.
Aside from #3 (which, I think, is an argument against partitions), I still see no reason to have separate partitions. If you want to protect your data, wouldn't you just put it on another set of real or virtual disks, or otherwise map to a shared resource somewhere else (NAS, SAN, whatever)?
To stop data from filling up your operating system volume and crashing the server.
File servers benefit from separate volumes if you use quotas, as these are usually set per volume. (e.g., put your users' home directories on one volume, profiles on another, company data on another, etc.)
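For what it's worth, here's a minimal sketch of scripting a per-volume quota with the built-in fsutil tool; the volume letter, limits, and account name are placeholder assumptions, and it needs an elevated prompt on an NTFS volume:

```python
import subprocess

# Placeholder values: warn the user at 4GB and hard-limit at 5GB on D:.
volume = "D:"
threshold = str(4 * 1024**3)   # warning threshold, in bytes
limit = str(5 * 1024**3)       # hard limit, in bytes
user = r"CONTOSO\jsmith"       # hypothetical account

# Quotas are tracked and enforced per volume, which is why separate
# volumes for home directories, profiles, and company data pay off.
subprocess.run(["fsutil", "quota", "enforce", volume], check=True)
subprocess.run(["fsutil", "quota", "modify", volume, threshold, limit, user],
               check=True)
subprocess.run(["fsutil", "quota", "query", volume], check=True)
```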
P.S. 10GB sounds too small for a system volume. After several years of Windows updates and service packs, it will fill up.
Restoring from backup becomes easier when program/data files are separated from the OS installation. I like to give at least 25GB to the OS partition, but the point remains the same.
Typically I don't find an advantage to making partitions.
Applications (Microsoft's and others') are notorious for demanding space on %SystemDrive% even if they allow you to choose a destination directory. Since the Automatic Updates service can't be told not to save backups of patched files, the "$Uninstall$" directories under %SystemRoot% grow and grow. Having an artificially constrained %SystemDrive% has been nothing but make-work for me.
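If you're curious how much of your system drive those folders are eating, a rough sketch like this tallies them up (the "$NtUninstall...$" naming matches the older Windows versions where this bites; adjust the pattern for your environment):

```python
import os

system_root = os.environ.get("SystemRoot", r"C:\Windows")
total = 0

# Sum everything under the update-rollback folders, e.g. $NtUninstallKB123456$.
for name in os.listdir(system_root):
    path = os.path.join(system_root, name)
    if name.startswith("$NtUninstall") and os.path.isdir(path):
        for dirpath, _dirnames, filenames in os.walk(path):
            for filename in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, filename))
                except OSError:
                    pass  # locked or unreadable files are skipped

print(f"Update rollback folders: {total / 1024**2:.0f} MB")
```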
I typically put shared directories and data under a single root-level subdirectory. That satisfies my need to keep applications and data apart.
Having said all this, generally this is a "religious" issue and I don't argue with people about it. Do what you want with your servers. Not having "data" partitions has served me well.
(Now, having separate physical volumes / spindles... that's another story.)
Part of the reason that we do this is that if you have some sort of runaway process that fills up the drive, Windows doesn't come crashing down when the disk runs out of space.
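A simple per-volume free-space check shows why the separation helps; this is just a sketch with assumed drive letters and an arbitrary 10% threshold:

```python
import shutil

# Each volume is checked independently: a runaway log filling D: still
# leaves C: (and therefore Windows itself) with breathing room.
for volume in ("C:\\", "D:\\"):              # assumed drive letters
    usage = shutil.disk_usage(volume)
    free_pct = usage.free / usage.total * 100
    status = "LOW" if free_pct < 10 else "ok"  # 10% threshold is arbitrary
    print(f"{volume} {free_pct:5.1f}% free [{status}]")
```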
The second reason we do this is to allow for different sized drives/different RAID levels for our OS and data partitions. For example, we might get (I'm rounding numbers and pulling them out of thin air here) 2x100GB SAS drives for an OS mirror partition, and then 6x700GB SAS drives for a RAID 10 data partition. Doing that could easily save you $1000 on the cost of the system at the end of the day.
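To make that concrete, here's the back-of-the-envelope arithmetic with the drive counts above and invented per-drive prices (the prices are pure assumptions, not quotes):

```python
# RAID 1 and RAID 10 both give you half the raw capacity as usable space.
os_usable = (2 * 100) // 2        # 2x100GB mirror  -> 100GB usable
data_usable = (6 * 700) // 2      # 6x700GB RAID 10 -> 2100GB usable

price_small, price_large = 150, 500   # hypothetical per-drive prices ($)
mixed_cost = 2 * price_small + 6 * price_large
uniform_cost = 8 * price_large        # large drives across the board

print(f"OS volume: {os_usable}GB usable; data volume: {data_usable}GB usable")
print(f"Mixed drives: ${mixed_cost} vs all large: ${uniform_cost} "
      f"(saves ${uniform_cost - mixed_cost})")
```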
The third reason is actually quite simple: whoever built the server with the Dell CD wasn't paying attention, and by default it creates a 10GB OS drive (20GB on newer releases, I believe).
Now, as Evan has said, this is really a personal preference that borders on "religious" belief. Honestly, with the size of today's drives, either way will work fine. Do what you are comfortable with... or what your corporate standards dictate.
EDIT (based on the original asker bringing up virtualization):
Virtualization brings up an interesting topic. As Evan pointed out, most of what I had to say was about different RAID containers. However, in my VMware environment I have a base template of 20GB. Now comes the interesting part: all of my servers are hosted on a SAN, and I have two volumes presented:
- the 20GB drive that is part of my template, and
- a data drive of variable size that I attach per the requirements of the system.
90% of the time these two disks are on the same RAID set, but they appear as two different "physical" drives to the machine. As usual, virtualization brings a layer of obscurity to the "standard" IT thought process.
I don't have an exact answer to your question, but I do have several anecdotes that you might find useful in designing your drive/partition setup.
(1) The corrupted NTFS
I had a server with two partitions, one for OS and one for data. At some point over the years, something went wrong with the data partition, and a single file nested about 6 levels deep became impossible to delete or rename. In the end, the only solution was to wipe the partition and reload the data back on. Obviously, it would have been much more painful without partitions.
(2) The full data partition
The same server as above, at another point in its life, managed to end up with a completely full data partition while there were dozens of GB available on the OS partition. As a stop-gap measure, I used a junction point to temporarily store data on the OS partition until the new server arrived. It was ugly, but it worked. Avoiding partitions would have meant avoiding ugly fixes.
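For reference, that kind of stop-gap junction can be scripted on a current box with the mklink builtin; a sketch with purely hypothetical paths:

```python
import subprocess

# The data volume (D:) is full, so park new files on the roomier OS
# volume (C:) while the old path keeps working for clients.
overflow_store = r"C:\Overflow\Projects"   # where the data actually lives
junction_path = r"D:\Shares\Projects"      # the path users keep using

# mklink is a cmd.exe builtin, so it has to run through cmd /c. The
# junction path must not exist yet; move the folder's contents to the
# overflow store first.
subprocess.run(["cmd", "/c", "mklink", "/J", junction_path, overflow_store],
               check=True)
```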
(3) The Server 2008 UAC
On a newer server, I discovered that you may have trouble administering any drive except the C: drive unless you are the local Administrator or a Domain Administrator. Merely being in the Administrators group is not sufficient. This is due to an oddity with UAC; for now, I have disabled UAC.
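A quick way to see whether you're hitting the filtered-token behavior is to check for elevation; IsUserAnAdmin is a real (if deprecated) shell32 call:

```python
import ctypes

# Nonzero only when the process holds a full administrator token. Under
# UAC, members of the Administrators group get a filtered token by
# default, which is why drive access can fail until you elevate.
if ctypes.windll.shell32.IsUserAnAdmin():
    print("Elevated: full administrative access to all drives.")
else:
    print("Filtered token: expect access trouble on non-system drives.")
```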
(4) The Volume Shadow Copy
Shadow Copy (aka Previous Versions) is toggled on/off on a per-partition basis. If you don't want to waste space storing previous versions for a particular data set, partitions are your best ally.
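On server SKUs you can script that per-volume association with vssadmin; here's a sketch capping shadow storage on an assumed D: data volume (verb availability varies by Windows edition):

```python
import subprocess

# Show which volumes currently have shadow-copy storage associated.
subprocess.run(["vssadmin", "list", "shadowstorage"], check=True)

# Cap shadow storage for the data volume at 10% of its size, so
# Previous Versions can't eat the volume; other volumes are untouched.
subprocess.run(["vssadmin", "resize", "shadowstorage",
                "/for=D:", "/on=D:", "/maxsize=10%"], check=True)
```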
My preferred course of action is to completely separate OS and data by having a separate RAID 1 array just for the operating system. This allows a great deal of flexibility. For example, I could upgrade all the hard drives used for data storage without having to change the OS installation at all.