Why does emptying disk space speed up computers?

I have been looking at a bunch of videos and now understand a bit better how computers work. I better understand what RAM is, volatile and non-volatile memory, and the process of swapping. I also understand why increasing RAM speeds up a computer.

I don't understand why cleaning up disk space speeds up a computer. Does it? Why does it? Does it have to do with searching for available space to save things? Or with moving things around to make a long enough continuous space to save something? How much empty space on the hard disk should I leave free?


Here, I wrote a book by accident. Get some coffee first.

Why does emptying disk space speed up computers?

It doesn't, at least not on its own. This is a really common myth. The reason it is such a common myth is that filling up your hard drive often happens at the same time as other things that traditionally could slow down your computer†. SSD performance does tend to degrade as the drive fills, but this is a relatively new issue, unique to SSDs, and is not really noticeable for casual users. Generally, low free disk space is just a red herring.

For example, things like:

  • File fragmentation. File fragmentation is an issue††, but lack of free space, while definitely one of many contributing factors, is not the only cause of it. Some key points here:

    • The chances of a file being fragmented are not related to the amount of free space left on the drive. They are related to the size of the largest contiguous block of free space on the drive (i.e. "holes" of free space), which the amount of free space happens to put an upper bound on. They are also related to how the file system handles file allocation (more below). Consider: A drive that is 95% full with all free space in one single contiguous block has 0% chance of fragmenting a new file††† (and the chance of fragmenting an appended file is independent of the free space). A drive that is only 5% full but with data spread evenly over the drive has a very high chance of fragmentation. (The toy allocator sketch just after this list illustrates the point.)

    • Keep in mind that file fragmentation only affects performance when the fragmented files are being accessed. Consider: You have a nice, defragmented drive that still has lots of free "holes" in it. A common scenario. Everything is running smoothly. Eventually, though, you get to a point where there are no more large blocks of free space remaining. You download a huge movie, and the file ends up severely fragmented. This will not slow down your computer. All of your application files and such that were previously fine won't suddenly become fragmented. This may make the movie take longer to load (although typical movie bit rates are so low compared to hard drive read rates that it'll most likely be unnoticeable), and it may affect I/O-bound performance while the movie is loading, but other than that, nothing changes.

    • While file fragmentation is certainly an issue, the effects are often mitigated by OS- and hardware-level buffering and caching. Delayed writes, read-ahead, strategies like the prefetcher in Windows, etc., all help reduce the effects of fragmentation. You generally don't experience a significant impact until the fragmentation becomes severe (I'd even venture to say that as long as your swap file isn't fragmented, you'll probably never notice).

  • Search indexing is another example. Let's say you have automatic indexing turned on and an OS that doesn't handle this gracefully. As you save more and more indexable content to your computer (documents and such), indexing may take longer and longer and may start to have an effect on the perceived speed of your computer while it is running, both in I/O and CPU usage. This is not related to free space, it's related to the amount of indexable content you have. However, running out of free space goes hand in hand with storing more content, hence a false connection is drawn.

  • Antivirus software. Similar to the search indexing example. Let's say you have antivirus software set up to do background scanning of your drive. As you have more and more scannable content, the scan takes more I/O and CPU resources, possibly interfering with your work. Again, this is related to the amount of scannable content you have. More content often equals less free space, but the lack of free space is not the cause.

  • Installed software. Let's say you have a lot of software installed that loads when your computer boots, thus slowing down start-up times. This slowdown happens because lots of software is being loaded. However, installed software takes up hard drive space. Therefore hard drive free space decreases at the same time that this happens, and again a false connection can be readily made.

  • Many other examples along those lines which, when taken together, appear to closely associate lack of free space with lower performance.
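
To make the "largest contiguous block" point above concrete, here is a toy first-fit allocator in Python. It is purely illustrative: the hole sizes and the allocation strategy are made up for the demo and don't correspond to how any real file system allocates space.

```python
# Toy illustration: fragmentation depends on the size of the largest
# contiguous free "hole", not on the total amount of free space.
# First-fit-largest allocation and the example hole sizes are assumptions.

def fragments_needed(file_size, holes):
    """Allocate file_size blocks into the given free holes (largest first).
    Returns how many separate extents (fragments) the file ends up in."""
    remaining = file_size
    fragments = 0
    for hole in sorted(holes, reverse=True):
        if remaining <= 0:
            break
        fragments += 1
        remaining -= min(hole, remaining)
    if remaining > 0:
        raise ValueError("not enough free space")
    return fragments

# Drive A: 95% full, but all 50 free blocks are in one contiguous hole.
print(fragments_needed(40, holes=[50]))        # -> 1 (no fragmentation)

# Drive B: only 5% full, but its 950 free blocks are scattered in small holes.
print(fragments_needed(40, holes=[10] * 95))   # -> 4 (fragmented)
```

Drive A has far less free space than drive B, yet the new file lands in one piece; what matters is the shape of the free space, not the amount.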

The above illustrate another reason that this is such a common myth: While lack of free space is not a direct cause of slowdown, uninstalling various applications, removing indexed or scanned content, etc. sometimes (but not always; that's outside the scope of this answer) increases performance again for reasons unrelated to the amount of free space remaining. But this also naturally frees up hard drive space. Therefore, again, an apparent (but false) connection between "more free space" and "faster computer" can be made.

Consider: If you have a machine running slowly due to lots of installed software, etc., and you clone your hard drive, exactly, to a larger hard drive and then expand your partitions to gain more free space, the machine won't magically speed up. The same software loads, the same files are still fragmented in the same ways, the same search indexer still runs, nothing changes despite having more free space.

Does it have to do with searching for available space to save things?

No. It does not. There are two very important things worth noting here:

  1. Your hard drive doesn't search around to find places to put things. Your hard drive is stupid. It's nothing but a big block of addressed storage that blindly puts things where your OS tells it to and reads whatever is asked of it. Modern drives have sophisticated caching and buffering mechanisms designed around predicting what the OS is going to ask for based on the experience we've gained over time (some drives are even aware of the file system that is on them), but essentially, think of your drive as just a big dumb brick of storage with occasional bonus performance features.

  2. Your operating system does not search for places to put things, either. There is no "searching". Much effort has gone into solving this problem, as it is critical to file system performance. The way that data is actually organized on your drive is determined by your file system. For example, FAT32 (old DOS and Windows PCs), NTFS (later Windows), HFS+ (Mac), ext4 (some Linuxes), and many others. Even the concepts of a "file" and a "directory" are merely products of typical file systems -- hard drives know not about the mysterious beasts called "files". Details are outside the scope of this answer. But essentially, all common file systems have ways of tracking where the available space is on a drive so that a search for free space is, under normal circumstances (i.e. file systems in good health), unnecessary. Examples (a toy free-space bitmap sketch follows this list):

    • NTFS has a master file table, which includes the special files $Bitmap, etc., and plenty of metadata describing the drive. Essentially it keeps track of where the next free blocks are, so that new files can be written directly to free blocks without having to scan the drive every time.

    • Another example, ext4 has what's called the "bitmap allocator", an improvement over ext2 and ext3 that basically helps it directly determine where free blocks are instead of scanning the list of free blocks. Ext4 also supports "delayed allocation", that is, buffering of data in RAM by the OS before writing it out to the drive in order to make better decisions about where to put it to reduce fragmentation.

    • Many other examples.
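
To give a feel for how such tracking works, here is a minimal free-space bitmap in Python. This is a gross simplification of the idea behind structures like NTFS's $Bitmap or ext4's block bitmaps; the class, method names, and allocation policy are invented for illustration and do not reflect any real on-disk format.

```python
# Minimal sketch of a free-space bitmap: one flag per block, True = in use.
# The file system consults this small structure instead of scanning the
# drive's data, which is the point being made above.

class BlockBitmap:
    def __init__(self, total_blocks):
        self.used = [False] * total_blocks   # False = free, True = allocated

    def allocate(self, count):
        """Find `count` free blocks and mark them used. No scan of file data
        is needed; only this bitmap is examined."""
        allocated = []
        for block, in_use in enumerate(self.used):
            if not in_use:
                allocated.append(block)
                if len(allocated) == count:
                    break
        if len(allocated) < count:
            raise OSError("disk full")
        for block in allocated:
            self.used[block] = True
        return allocated

    def free(self, blocks):
        for block in blocks:
            self.used[block] = False

bitmap = BlockBitmap(total_blocks=16)
print(bitmap.allocate(4))   # -> [0, 1, 2, 3]
bitmap.free([1, 2])
print(bitmap.allocate(3))   # -> [1, 2, 4]  (reuses the freed blocks first)
```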

Or with moving things around to make a long enough continuous space to save something?

No. This does not happen, at least not with any file system I'm aware of. Files just end up fragmented.

The process of "moving things around to make up a long enough contiguous space for saving something" is called defragmenting. This doesn't happen when files are written. This happens when you run your disk defragmenter. On newer Windows, at least, this happens automatically on a schedule, but it is never triggered by writing a file.

Being able to avoid moving things around like this is key to file system performance, and is why fragmentation happens and why defragmentation exists as a separate step.
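
For a rough picture of what that separate step does, here is a toy "defragment" pass in Python. It is only a sketch: real defragmenters move extents incrementally, respect in-use files, and need scratch space (more on that below); the layout representation here is invented for illustration.

```python
# Toy "defragment" pass: rewrite a block layout so each file's blocks are
# contiguous and the free space becomes one contiguous run. This shows the
# after-the-fact nature of defragmentation, not how a real tool works.

def defragment(layout):
    """layout maps block number -> file name (or None if free).
    Returns a new layout with every file's blocks packed contiguously."""
    files = {}
    for block in sorted(layout):
        name = layout[block]
        if name is not None:
            files.setdefault(name, 0)
            files[name] += 1

    new_layout = {}
    next_block = 0
    for name, size in files.items():
        for _ in range(size):
            new_layout[next_block] = name
            next_block += 1
    # Remaining blocks become one contiguous run of free space.
    for block in range(next_block, len(layout)):
        new_layout[block] = None
    return new_layout

fragmented = {0: "a", 1: None, 2: "b", 3: "a", 4: None, 5: "b", 6: "a", 7: None}
print(defragment(fragmented))
# -> {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: None, 6: None, 7: None}
```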

How much empty space on the hard disk should I leave free?

This is a trickier question to answer, and this answer has already turned into a small book.

Rules of thumb:

  • For all types of drives:

    • Most importantly, leave enough free space for you to use your computer effectively. If you're running out of space to work, you'll want a bigger drive.
    • Many disk defragmentation tools require a minimum amount of free space (I think the one that ships with Windows requires 15% in the worst case) to work in. They use this free space to temporarily hold fragmented files as other things are rearranged.
    • Leave space for other OS functions. For example, if your machine does not have a lot of physical RAM, and you have virtual memory enabled with a dynamically sized page file, you'll want to leave enough space for the page file's maximum size. Or if you have a laptop that you put into hibernation mode, you'll need enough free space for the hibernation state file. Things like that.
  • SSD-specific:

    • For optimum reliability (and, to a lesser extent, performance) SSDs require some free space, which, without going into too much detail, they use for spreading data around the drive to avoid constantly writing to the same place (which wears them out). This concept of leaving free space is called over-provisioning. It's important, but in many SSDs, mandatory over-provisioned space already exists. That is, the drives often have a few dozen more GB than they report to the OS. Lower-end drives often require you to manually leave unpartitioned space, but for drives with mandatory OP, you do not need to leave any free space. An important thing to note here is that over-provisioned space is often only taken from unpartitioned space. So if your partition takes up your entire drive and you leave some free space on it, that doesn't always count. Many times, manual over-provisioning requires you to shrink your partition to be smaller than the size of the drive. Check your SSD's user manual for details. TRIM, garbage collection, and such have effects as well, but those are outside the scope of this answer. (A back-of-the-envelope example of the numbers follows this list.)
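
As a back-of-the-envelope example of how the over-provisioning numbers work out (the capacities below are made up for illustration, not the specs of any particular drive):

```python
# Rough over-provisioning arithmetic. The capacities are made-up examples.

def overprovisioning_percent(physical_gb, user_visible_gb):
    """Spare area expressed as a percentage of the user-visible capacity."""
    return (physical_gb - user_visible_gb) / user_visible_gb * 100

# Drive with built-in (mandatory) OP: more flash than it reports to the OS.
print(round(overprovisioning_percent(physical_gb=512, user_visible_gb=480), 1))  # ~6.7%

# Manual OP on a drive with little built-in spare: shrink the partition so
# part of the user-visible space is never written to by the OS.
partitioned_gb = 480 * 0.9   # leave 10% of the drive unpartitioned
print(round(overprovisioning_percent(physical_gb=512, user_visible_gb=partitioned_gb), 1))  # ~18.5%
```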

Personally I usually grab a bigger drive when I have about 20-25% free space remaining. This isn't related to performance, it's just that when I get to that point, I expect that I'll probably be running out of space for data soon, and it's time to get a bigger drive.

More important than watching free space is making sure scheduled defragmentation is enabled where appropriate (not on SSDs), so that you never get to the point where it becomes dire enough to affect you. Equally important is avoiding misguided tweaks and letting your OS do its thing, e.g. don't disable the Windows prefetcher (except for SSDs), etc.


There's one last thing worth mentioning. One of the other answers here mentioned that SATA's half-duplex mode prevents reading and writing at the same time. While true, this is greatly oversimplified and is mostly unrelated to the performance issues being discussed here. What this means, simply, is that data can't be transferred in both directions on the wire at the same time. However, SATA has a fairly complex specification involving tiny maximum block sizes (about 8kB per block on the wire, I think), read and write operation queues, etc., and does not preclude writes to buffers happening while reads are in progress, interleaved operations, etc.

Any blocking that occurs would be due to competing for physical resources, usually mitigated by plenty of cache. The duplex mode of SATA is almost entirely irrelevant here.


† "Slow down" is a broad term. Here I use it to refer to things that are either I/O-bound (e.g. if your computer is sitting there crunching numbers, the contents of the hard drive have no impact), or CPU-bound and competing with tangentially related things that have high CPU usage (e.g. antivirus software scanning tons of files).

†† SSDs are affected by fragmentation in that sequential access speeds are generally faster than random access, despite SSDs not facing the same limitations as a mechanical device (even then, lack of fragmentation does not guarantee sequential access, due to wear leveling, etc., as James Snell notes in comments). However, in virtually every general use scenario, this is a non-issue. Performance differences due to fragmentation on SSDs are typically negligible for things like loading applications, booting the computer, etc.

††† Assuming a sane file system that isn't fragmenting files on purpose.


In addition to Nathanial Meek's explanation for HDDs, there is a different scenario for SSDs.

SSDs are not sensitive to scattered data because the access time to any place on the SSD is the same. Typical SSD access time is 0.1 ms, versus a typical HDD access time of 10 to 15 ms. SSDs are, however, sensitive to data that is already written on the drive.

Unlike traditional HDDs, which can overwrite existing data in place, an SSD needs completely empty (erased) space to write data. Keeping such space available is the job of TRIM and garbage collection, which purge data that was marked as deleted. Garbage collection works best in combination with a certain amount of free space on the SSD. Usually 15% to 25% of free space is recommended.

If garbage collection cannot complete its job in time, then each write operation is preceded by a cleanup of the space where the data is supposed to be written. That roughly doubles the time for each write operation and degrades overall performance.
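
A rough model of why that hurts, with timings invented purely for illustration (real SSDs erase whole blocks, program individual pages, and spread this work across many flash chips in parallel):

```python
# Rough model of the erase-before-write penalty described above.
# The timings are invented for illustration only.

PAGE_WRITE_US = 300    # time to program one already-erased page (made up)
BLOCK_ERASE_US = 2000  # time to erase space before it can be rewritten (made up)

def write_time_us(pages, pre_erased_pages_available):
    """Writing to pre-erased space costs only the program time; if garbage
    collection has fallen behind, an erase gets charged to the write path."""
    total = 0
    for _ in range(pages):
        if pre_erased_pages_available > 0:
            pre_erased_pages_available -= 1
            total += PAGE_WRITE_US
        else:
            total += BLOCK_ERASE_US + PAGE_WRITE_US  # erase-on-demand
    return total

print(write_time_us(100, pre_erased_pages_available=100))  # 30000 us: GC kept up
print(write_time_us(100, pre_erased_pages_available=0))    # 230000 us: GC fell behind
```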

There are excellent articles that explain the functioning of TRIM and garbage collection in more detail.


Somewhere inside a traditional hard disk is a spinning metal platter where the individual bits and bytes are actually encoded. As data is added to the platter, the disk controller stores it on the outside of the disk first. As new data is added, space is used moving towards the inside of the disk, with the innermost tracks used last.

With this in mind, there are two effects that cause disk performance to decrease as the disk fills up: Seek Times and Rotational Velocity.

Seek Times

To access data, a traditional hard disk must physically move a read/write head into the correct position. This takes time, called the "seek time". Manufacturers publish the seek times for their disks, and it's typically just a few milliseconds. That may not sound like much, but to a computer it's an eternity. If you have to read or write to a lot of different disk locations to complete a task (which is common), those seek times can add up to noticeable delay or latency.

A drive that is almost empty will have most of its data in or near the same position, typically at the outer edge near the rest position of the read/write head. This reduces the need to seek across the disk, greatly reducing the time spent seeking. A drive that is almost full will not only need to seek across the disk more often and with larger/longer seek movements, but may also have trouble keeping related data in the same area, further increasing disk seeks. This is called fragmented data.
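
To put rough numbers on it (the seek time, rotational latency, and operation count below are illustrative figures, not measurements of any particular drive):

```python
# Why a few milliseconds per seek adds up. Illustrative figures only.

avg_seek_ms = 9.0              # a typical published average seek time for a desktop HDD
rotational_latency_ms = 4.17   # half a revolution at 7200 rpm

def random_io_time_ms(operations):
    """Time spent just positioning the head for this many scattered accesses,
    ignoring the actual data transfer."""
    return operations * (avg_seek_ms + rotational_latency_ms)

print(random_io_time_ms(1))      # ~13 ms for one access: imperceptible
print(random_io_time_ms(2000))   # ~26,000 ms (26 seconds) for 2000 scattered accesses
```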

Freeing disk space can improve seek times by allowing the defragmentation service not only to more quickly clean up fragmented files, but also to move files towards the outside of the disk, so that the average seek time is shorter.

Rotational Velocity

Hard drives spin at a fixed rate (typically 5400 rpm or 7200 rpm for your computer, and 10,000 rpm or even 15,000 rpm on a server). It also takes a fixed amount of space on the drive (more or less) to store a single bit. For a disk spinning at a fixed rotation rate, the outside of the disk will have a faster linear velocity than the inside of the disk. This means bits near the outer edge of the disk move past the read head at a faster rate than bits near the center of the disk, and thus the read/write head can read or write bits faster near the outer edge of the disk than the inner.

A drive that is almost empty will spend most of its time accessing bits near the faster outer edge of the disk. A drive that is almost full will spend more time accessing bits near the slower inner portion of the disk.
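
A quick way to see the outer-versus-inner difference: at a fixed rotation rate and roughly constant bit density along a track, sustained throughput scales with the track's radius. The radii below are rough figures assumed for a 3.5" platter, not measurements of any particular drive.

```python
# Linear velocity under the head at a fixed rotation rate. With roughly
# constant bit density along a track, throughput scales with the radius.
# The radii are rough assumed figures for a 3.5" platter.

import math

RPM = 7200
outer_radius_mm = 46   # approximate usable outer radius (assumed)
inner_radius_mm = 20   # approximate usable inner radius (assumed)

def linear_speed_m_per_s(radius_mm):
    revolutions_per_s = RPM / 60
    return 2 * math.pi * (radius_mm / 1000) * revolutions_per_s

outer = linear_speed_m_per_s(outer_radius_mm)
inner = linear_speed_m_per_s(inner_radius_mm)
print(round(outer, 1), "m/s at the outer edge")                 # ~34.7 m/s
print(round(inner, 1), "m/s at the inner edge")                 # ~15.1 m/s
print(round(outer / inner, 2), "x advantage for outer tracks")  # ~2.3x
```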

Again, emptying disk space can make the computer faster by allowing the defrag service to move data towards the outside of the disk, where reads and writes are faster.

Sometimes a disk will actually move too fast for the read head, and this effect is reduced because sectors near the outer edge will be staggered... written out of order so that the read head can keep up. But overall this holds.

Both of these effects come down to a disk controller grouping data together in the faster part of the disk first, and not using the slower parts of the disk until it has to. As the disk fills up, more and more time is spent in the slower part of the disk.

The effects also apply to new drives. All else being equal, a new 1TB drive is faster than a new 200GB drive, because the 1TB drive stores bits closer together and won't fill to the inner tracks as fast. However, attempting to use this to inform purchasing decisions is rarely helpful, as manufacturers may use multiple platters to reach the 1TB size, smaller platters to limit a 1TB system to 200GB, software/disk controller restrictions to limit a 1TB platter to only 200GB of space, or sell a drive with partially completed/flawed platters from a 1TB drive with lots of bad sectors as a 200GB drive.

Other Factors

It's worth noting here that the above effects are fairly small. Computer hardware engineers spend a lot of time working on how to minimize these issues, and things like hard drive buffers, Superfetch caching, and other systems all work to minimize the problem. On a healthy system with plenty of free space, you're not likely to even notice. Additionally, SSDs have completely different performance characteristics. However, the effects do exist, and a computer does legitimately get slower as the drive fills up. On an unhealthy system, where disk space is very low, these effects can create a disk thrashing situation, where the disk is constantly seeking back and forth across fragmented data, and freeing up disk space can fix this, resulting in more dramatic and noticeable improvements.

Additionally, adding data to the disk means that certain other operations, like indexing, AV scans, and defragmentation processes, are simply doing more work in the background, even if they are doing it at or near the same speed as before.

Finally, disk performance is a huge indicator of overall PC performance these days... an even larger indicator than CPU speed. Even a small drop in disk throughput will very often equate to a real, perceived overall drop in PC performance. This is especially true as hard disk performance hasn't really kept pace with CPU and memory improvements; the 7200 rpm disk has been the desktop standard for over a decade now. More than ever, that traditional spinning disk is the bottleneck in your computer.


All of the other answers are technically correct; however, I've always found that this simple example explains it best.

Sorting things is really easy if you have lots of space... but difficult if you don't have the space... computers need the space too!

This classic "15 puzzle" is tricky/time consuming because you only have 1 free square to shuffle the tiles around in to get them in the correct 1-15 order.

[Image: hard 15 puzzle]

However, if the space were much bigger, you could solve this puzzle in well under 10 seconds.

[Image: easy 15 puzzle]

For anyone who has ever played with this puzzle... understanding the analogy seems to come naturally. ;-)