Is there a reason to keep Windows' primary partition / drive C: small?
Solution 1:
At my jobs almost two decades ago, IT experts would keep the size of Windows' main partition (the C drive) extremely small compared to the other partitions. They argued this keeps the PC running at optimum speed without slowing down. [...] My question: is this practice still good?
In general: No.
In older Windows versions, there were performance problems with large drives (more accurately: with large filesystems), mainly because the FAT filesystem used by Windows did not support large filesystems well. However, all modern Windows installations use NTFS instead, which solved these problems. See for example Does NTFS performance degrade significantly in volumes larger than five or six TB?, which explains that even terabyte-sized partitions are not usually a problem.
Nowadays, there is generally no reason not to use a single, large C: partition. Microsoft's own installer defaults to creating a single, large C: drive. If there were good reasons to create a separate data partition, the installer would offer it - why should Microsoft let you install Windows in a way that creates problems?
The main reason against multiple drives is that it increases complexity - which is always bad in IT. It creates new problems, such as:
- you need to decide which files to put onto which drive (and change settings appropriately, click stuff in installers etc.)
- some (badly written) software may not like being installed on a drive other than C:
- you can end up with too little free space on one partition while another still has plenty, which can be difficult to fix (a quick way to spot this is sketched below)
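For illustration, here is a minimal sketch (Python; the drive letters are just examples) that prints total and free space per volume, which makes such an imbalance visible at a glance:

```python
# Minimal sketch: print total/free space per volume so an imbalance between
# partitions is obvious at a glance. Drive letters are just examples.
import shutil

for drive in ("C:\\", "D:\\"):
    try:
        usage = shutil.disk_usage(drive)
    except OSError:
        continue  # that drive letter does not exist on this machine
    print(f"{drive} total {usage.total / 2**30:7.1f} GiB, "
          f"free {usage.free / 2**30:7.1f} GiB")
```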
There are some special cases where multiple partitions still make sense:
- If you want to dual-boot, you (usually) need separate partitions for each OS install (but still only one partition per install).
- If you have more than one drive (particularly drives with different characteristics, such as SSD & HD), you may want to pick and choose what goes where - in that case it can make sense to e.g. put drive C: on the SSD and D: on the HD.
To address some arguments often raised in favor of small/separate partitions:
- small partitions are easier to backup
You should really back up all your data anyway, so splitting it across partitions does not really help. Also, if you really need to, all backup software I know of lets you selectively back up part of a partition (a minimal sketch of a folder-level backup follows after these points).
- if one partition is damaged, the other partition may still be ok
While this is theoretically true, there is no guarantee that damage will nicely limit itself to one partition (and in case of problems it is even harder to verify that it did), so this provides only a limited guarantee. Plus, if you have good, redundant backups, the added safety is usually too small to be worth the bother. And if you don't have backups, you have much bigger problems...
- if you put all user data on a data partition, you can wipe and reinstall / not backup the OS partition because there is no user data there
While this may be true in theory, in practice many programs will still write settings and other important data to drive C: (because they are unfortunately hardcoded to do that, or because you simply forgot to change their settings). Therefore IMHO it is very risky to rely on this. Plus, you need good backups anyway (see above), so after reinstallation you can restore the backups, which will give you the same result (just more safely). Modern Windows versions already keep user data in a separate directory (the user profile directory), so selectively restoring is possible.
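To illustrate the selective backup/restore point, here is a rough sketch (Python; the source folders and the destination path are placeholders I chose, not anything prescribed above) that archives a few chosen folders instead of imaging a whole partition:

```python
# Rough sketch of a selective, file-level backup: archive a few chosen folders
# instead of imaging the whole partition. Source folders and destination are
# placeholders.
import shutil
from datetime import date
from pathlib import Path

sources = [Path.home() / "Documents", Path.home() / "Pictures"]   # example folders
target = Path(r"D:\Backups") / date.today().isoformat()           # example destination
target.mkdir(parents=True, exist_ok=True)

for src in sources:
    if src.exists():
        # Writes e.g. D:\Backups\2024-05-01\Documents.zip
        shutil.make_archive(str(target / src.name), "zip", root_dir=src)
```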
See also Will you install software on the same partition as Windows system? for more information.
Solution 2:
The historical reason for this practice is most likely rooted in the performance characteristics of rotating magnetic HDDs. On spinning disks, the area with the highest sequential access speed is the outermost region (the sectors near the start of the drive).
If you use the whole drive for your operating system, sooner or later (through updates etc.) your OS files will be spread out all over the disk surface. So, to make sure the OS files physically stay in the fastest disk area, you would create a small system partition at the beginning of the drive and split the rest of the drive into as many data partitions as you like.
Seek latency also partly depends on how far the heads have to move, so keeping all the small files somewhat near each other also has an advantage on rotational drives.
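For a rough sense of the numbers, here is an illustrative back-of-the-envelope calculation (Python; the radii are assumed typical values, not measurements of any particular drive):

```python
# Illustrative arithmetic only: at a fixed RPM the medium passes the head faster
# on outer tracks, and zoned recording packs more sectors into them, so sequential
# throughput roughly scales with track radius. The radii below are assumptions.
import math

rpm = 7200
outer_mm, inner_mm = 45.0, 22.0   # assumed data-zone radii of a 3.5" platter
for name, radius_mm in (("outer", outer_mm), ("inner", inner_mm)):
    linear_speed = 2 * math.pi * (radius_mm / 1000) * rpm / 60   # metres per second
    print(f"{name} track: ~{linear_speed:4.1f} m/s under the head")

print(f"outer/inner throughput ratio ≈ {outer_mm / inner_mm:.1f}x "
      "(real drives typically measure about 1.5-2x from start to end)")
```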
This practice has lost its entire rationale with the advent of SSD storage.
Solution 3:
Is there a reason to keep Windows' primary partition / drive C: small?
Here are a few reasons to do that:
- All system files and the OS itself are on the primary partition. It is better to keep those files separated from other software, personal data and files, simply because constantly meddling in the bootable partition and mixing your files in there might occasionally lead to mistakes, like deleting system files or folders by accident. Organization is important. This is why the primary partition is kept small -- to discourage users from dumping all their data in there.
- Backups - it's a lot easier, faster, and more effective to back up and recover a smaller partition than a bigger one, depending on the purpose of the system. As noted by @computercarguy in the comments, it is better to back up specific folders and files than a whole partition, unless needed.
- It could improve performance, but only in a hardly noticeable manner. On NTFS filesystems, each partition has a so-called Master File Table (MFT), which contains metadata about all the files on the partition:
Describes all files on the volume, including file names, timestamps, stream names, and lists of cluster numbers where data streams reside, indexes, security identifiers, and file attributes like "read only", "compressed", "encrypted", etc.
This might introduce an advantage, but it is unnoticeable and can be ignored, as it really doesn't make a difference. @WooShell's answer is more closely related to the performance issue, even though it is still negligible.
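If you want to peek at the MFT of a volume yourself, the stock fsutil tool can show it; below is a small sketch (Python as a thin wrapper, run from an elevated prompt; the exact field names in the output vary between Windows versions, so treat the filter as illustrative):

```python
# Small sketch for peeking at the MFT of a volume via the stock fsutil tool.
# Must be run elevated; the "Mft" filter below is only illustrative, since the
# exact field names differ between Windows versions.
import subprocess

output = subprocess.run(
    ["fsutil", "fsinfo", "ntfsinfo", "C:"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    if "Mft" in line:   # e.g. "Mft Valid Data Length", "Mft Zone Start/End"
        print(line.strip())
```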
Another thing to note is that in the case of having an SSD + HDD, it is far better to store your OS on the SSD and all your personal files/data on the HDD. You most likely won't need the performance boost of an SSD for most of your personal files, and consumer-grade solid state drives usually do not have much space on them, so you'd rather not fill them up with personal files.
Can someone explain why this practice is done and is it still valid?
I described some of the reasons why it is done above. And yes, it is still valid, though it no longer seems to be a good practice. The most notable downsides are that end users have to keep track of where applications suggest installing their files and change that location (possible during almost any software installation, especially if an expert/advanced install is an option) so the bootable partition doesn't fill up, since the OS does need to update from time to time. Another downside is that when moving files from one partition to another the data actually has to be copied, whereas moving within the same partition only updates the MFT and metadata without rewriting the files themselves (see the sketch below).
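The following sketch (Python; all paths are placeholders) illustrates that difference: a same-volume rename is a metadata-only operation, while a cross-volume move falls back to copying every byte:

```python
# Sketch of the rename-vs-copy difference (all paths are placeholders). Within one
# volume a "move" only rewrites directory/MFT entries; across volumes the contents
# must be physically copied, which is what shutil.move falls back to when
# os.rename refuses to cross devices.
import os
import shutil

same_volume_target = r"C:\Data\big_file.bin"
other_volume_target = r"D:\Data\big_file.bin"

# Same volume: effectively instant regardless of file size.
os.rename(r"C:\Temp\big_file.bin", same_volume_target)

# Different volume: os.rename raises OSError on Windows, so fall back to a copy.
try:
    os.rename(same_volume_target, other_volume_target)
except OSError:
    shutil.move(same_volume_target, other_volume_target)   # rewrites every byte
```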
Some of these unfortunately can introduce more problems:
- It does increase the complexity of the structure, which makes it harder and more time-consuming to manage.
- Some applications still write files/metadata to the system partition (file associations, context menus, etc.), even if installed on another partition, which makes backups harder and can introduce failures in syncing between partitions. (Thanks to @Bob's comment.)
To avoid the problem you're having, you need to:
- Always try to install applications on the other partitions (change the default installation location).
- Make sure to install only important software on your bootable partition; less essential software should be kept outside of it.
I am also not saying that having multiple partitions with a small primary one is the best idea. It all depends on the purpose of the system, and although it introduces a better way to organize your files, it comes with downsides, which on current Windows systems outweigh the pros.
Note: And as you've mentioned yourself, it does keep the data in the separate partitions safe in case a failure of the bootable partition occurs.
Solution 4:
Short answer: Not any more.
In my experience (20+ years of IT administration work), the primary reason for this practice (others are listed below) is that users basically didn't trust Windows with their data and hard drive space.
Windows has long been notoriously bad at staying stable over time, cleaning up after itself, keeping the system partition healthy and providing convenient access to user data on it. So users preferred to reject the filesystem hierarchy that Windows provided and roll their own outside of it. The system partition also acted as a containment zone, denying Windows the means to wreak havoc outside of its confines.
- There are lots of products, including those from Microsoft, that don't uninstall cleanly and/or cause compatibility and stability issues (the most prominent manifestations are leftover files and registry entries all around, and DLL Hell in all of its incarnations). Many files created by the OS are not cleaned up afterwards (logs, Windows updates etc.), leading to the OS taking up more and more space as time goes on. In the Windows 95 and even the XP era, advice went as far as suggesting a clean reinstall of the OS once in a while. Reinstalling the OS required the ability to guarantee wiping the OS and its partition (to also clean up any bogus data in the filesystem) -- impossible without multiple partitions. And splitting the drive without losing data is only possible with specialized programs (which may have their own nasty surprises, like bailing out and leaving the data in an unusable state upon encountering a bad sector).
Various "clean up" programs alleviated the problem, but, their logic being based on reverse engineering and observed behaviour, they were even more likely to cause a major malfunction forcing a reinstall (e.g. the `RegClean` utility by MS itself was retired after the Office 2007 release broke the assumptions about the registry it was based on). The fact that many programs saved their data into arbitrary places made separating user and OS data even harder, pushing users to install programs outside of the OS hierarchy as well.
- Microsoft tried a number of ways to enhance stability, with varying degrees of success (shared DLLs, Windows File Protection and its successor TrustedInstaller, the Side-by-side subsystem, a separate repository for .NET modules with a storage structure that prevents version and vendor conflicts). The latest versions of Windows Installer even have rudimentary dependency checking (probably the last major package manager in general use to include that feature).
- With regard to 3rd-party software compliance with best practices, they maneuvered between maintaining compatibility with sloppily-written but sufficiently used software (otherwise its users would not upgrade to a new Windows version) -- which led to a mind-boggling amount of kludges and workarounds in the OS, including undocumented API behavior, live patching of 3rd-party programs to fix bugs in them, and a few levels of registry and filesystem virtualization -- and forcing 3rd-party vendors into compliance with measures like a certification logo program and a driver signing program (made compulsory starting with Vista).
- User data being buried under a long path inside the user's profile made it inconvenient to browse for and to specify paths to it. The paths also used long names, had spaces (a bane of command shells everywhere) and national characters (a major problem for programming languages except very recent ones with comprehensive Unicode support), and were locale-specific (!) and unobtainable without winapi access (!!), killing any internationalization effort in scripts -- none of which helped matters either. So having your data in the root dir of a separate drive was seen as a more convenient data structure than what Windows provided. (A short sketch of the API call this requires follows after this list.)
  - This was only fixed in very recent Windows releases. The paths themselves were fixed in Vista: long names were compacted, spaces and localized names eliminated. The browsing problem was fixed in Win7, which provided Start Menu entries for both the root of the user profile and most other directories under it, plus things like persistent "Favorite" folders in file selection dialogs, with sensible defaults like `Downloads`, to save the need to browse for them each time.
- All in all, MS' efforts bore fruit in the end. Roughly since Win7, the OS, stock and 3rd-party software, including cleanup utilities, have been stable and well-behaved enough, and HDDs large enough, for the OS not to require reinstallation for the entire life of a typical workstation. And the stock hierarchy is usable and accessible enough to actually be accepted and used in day-to-day practice.
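As a footnote to the "unobtainable without winapi access" point above, here is a minimal sketch (Python with ctypes, Windows-only) of what retrieving such a path properly looks like; the GUID is assumed to be FOLDERID_Documents from the Known Folders API, so treat it as illustrative:

```python
# Minimal sketch (Windows-only) of asking the shell for a known folder instead of
# hard-coding a localized path. The GUID is assumed to be FOLDERID_Documents.
import ctypes
from uuid import UUID

def known_folder_path(folder_guid: str) -> str:
    # REFKNOWNFOLDERID is a GUID passed by reference, in little-endian byte order.
    fid = (ctypes.c_byte * 16).from_buffer_copy(UUID(folder_guid).bytes_le)
    path_ptr = ctypes.c_wchar_p()
    hr = ctypes.windll.shell32.SHGetKnownFolderPath(fid, 0, None, ctypes.byref(path_ptr))
    if hr != 0:
        raise OSError(f"SHGetKnownFolderPath failed with HRESULT {hr:#x}")
    try:
        return path_ptr.value
    finally:
        ctypes.windll.ole32.CoTaskMemFree(path_ptr)

FOLDERID_Documents = "{FDD39AD0-238F-46AF-ADB4-6C85480369C7}"
print(known_folder_path(FOLDERID_Documents))   # e.g. C:\Users\<name>\Documents
```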
Secondary reasons are:
- Early software (filesystem and partitioning support in the BIOS and OSes) lagged behind hard drives in supporting large volumes of data, necessitating splitting a hard drive into parts to be able to use its full capacity.
  - This was primarily an issue in DOS and Windows 95 times. With the advent of FAT32 (Windows 98) and NTFS (Windows NT 3.1), the problem was largely solved for the time being.
  - The 2 TB barrier that emerged later was addressed by the newer generation of filesystems (ext4 and recent versions of NTFS), GPT and 4K disks (see the arithmetic sketch at the end of this answer).
- Various attempts to optimize performance. Rotational hard drives are slightly (about 1.5 times) faster at reading data from the outer tracks (which map to the starting sectors) than from the inner ones, which suggested locating frequently-accessed files like OS libraries and the pagefile near the start of the disk.
  - Since user data is also accessed very often, and head repositioning has an even larger impact on performance, outside of very specific workloads the improvement in real-life use is marginal at best.
- Multiple physical disks. This is a non-typical setup for a workstation, since a modern HDD is often sufficiently large by itself and laptops don't even have space for a second HDD. Most if not all stations I've seen with this setup are desktops that (re)use older HDDs that are still operational and add up to the necessary size -- otherwise, either a RAID should be used, or one of the drives should hold backups and not be in regular use.
  - This is probably the sole case where one gets a real gain from splitting system and data into separate volumes: since they are physically on different hardware, they can be accessed in parallel (unless it's two PATA drives on the same cable) and there's no performance hit from head repositioning when switching between them.
    - To reuse the Windows directory structure, I typically move `C:\Users` to the data drive (see the relocation sketch at the end of this answer). Moving just a single profile, or even just `Documents`, `Downloads` and `Desktop`, proved to be inferior because other parts of the profile and `Public` can also grow uncontrollably (see the "separate configuration and data" setup below).
  - Though the disks can be consolidated into a spanned volume, I don't use or recommend this, because Dynamic Volumes are a proprietary technology that 3rd-party tools have trouble working with, and because if any of the drives fails, the entire volume is lost.
- An M.2 SSD + HDD.
  - In this case, I rather recommend using the SSD solely as a cache: this way, you get the benefit of an SSD for your entire array of data rather than just some arbitrary part of it, and what is accelerated is determined automagically by what you actually access in practice.
  - In any case, this setup in a laptop is inferior to just a single SSD, because HDDs are also intolerant of external shock and vibration, which are very real occurrences for laptops.
- Dual boot scenarios. Generally, two OSes can't coexist on a single partition. This is the only scenario that I know of that warrants multiple partitions on a workstation. And use cases for that are vanishingly rare nowadays anyway because every workstation is now powerful enough to run VMs.
- On servers, there are a number of other valid scenarios -- but none of them applies to Super User's domain.
  - E.g. one can separate persistent data (programs and configuration) from changing data (app data and logs) to prevent a runaway app from breaking the entire system. There are also various special needs (e.g. in an embedded system, persistent data often resides on an EEPROM while working data is on a RAM drive). Linux's Filesystem Hierarchy Standard lends itself nicely to tweaks of this kind.
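For reference, here is the arithmetic behind the "2 TB barrier" mentioned in the first secondary reason, as a quick sketch (Python; assumes the usual 512-byte logical sectors):

```python
# Assumes the usual 512-byte logical sectors. The MBR partition table stores
# sector counts in 32-bit fields, while GPT uses 64-bit LBAs -- hence the limits.
SECTOR = 512
mbr_limit = (2**32) * SECTOR
gpt_limit = (2**64) * SECTOR
print(f"MBR addressable span: {mbr_limit / 2**40:.0f} TiB")   # -> 2 TiB
print(f"GPT addressable span: {gpt_limit / 2**70:.0f} ZiB")   # -> 8 ZiB
# A 4Kn drive raises the MBR ceiling eightfold (2**32 * 4096 bytes = 16 TiB),
# which is the "4K disks" part of that point.
```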
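And here is a rough sketch of the "move a folder to another volume and leave a junction behind" mechanism referenced under the multiple-disks item. This only illustrates the mechanism, not my exact procedure: relocating the whole `C:\Users` tree has to be done while Windows is offline (e.g. from WinPE), and both paths below are placeholders.

```python
# Illustration of the junction mechanism only; paths are placeholders.
import subprocess

SRC = r"C:\Example"   # folder to relocate (placeholder)
DST = r"D:\Example"   # new physical location on the data drive (placeholder)

# 1. Copy everything, preserving attributes, ACLs and timestamps.
#    robocopy uses exit codes 0-7 for success, so don't treat nonzero as fatal.
subprocess.run(["robocopy", SRC, DST, "/MIR", "/COPYALL", "/R:1", "/W:1"], check=False)

# 2. Remove the original and leave a junction at the old path, so existing
#    absolute paths keep working while the data physically lives on the other drive.
subprocess.run(["cmd", "/c", "rmdir", "/s", "/q", SRC], check=True)
subprocess.run(["cmd", "/c", "mklink", "/J", SRC, DST], check=True)
```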