Over-provisioning an SSD - does it still hold?

Windows will generally use TRIM. This means that as long as you have X% free space on the filesystem, the drive will see that X% as unallocated.[*] Over-provisioning is not required.

Exception: historically, SSDs with SandForce controllers/firmware have not restored full performance after TRIM :(.

Performance loss when the drive is full can be significant, more so than on some other drives. It is associated with high write amplification, which also increases wear. Source: AnandTech reviews.

So over-provisioning is necessary if and only if

  • you're not sure that TRIM will be used. AFAIK it's still not enabled by default on Linux, because of performance issues with a few old, badly behaved drives.
  • OR you're worried about filling a SandForce drive (and the content won't be amenable to compression by its "smart" controller).

It's not too hard to enable TRIM on Linux, and you're unlikely to notice any problems.
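
If you want to check whether the kernel even reports TRIM support for your drive before relying on it, here's a minimal Python sketch that reads the standard sysfs attribute (the device name sda is only an example; substitute your own):

    from pathlib import Path

    # Minimal sketch: ask sysfs whether Linux reports TRIM (discard)
    # support for a block device. A nonzero discard_granularity means
    # the kernel believes the device accepts discard/TRIM commands.
    # "sda" is only an example device name; substitute your own.
    def supports_trim(device="sda"):
        path = Path(f"/sys/block/{device}/queue/discard_granularity")
        try:
            return int(path.read_text().strip()) > 0
        except (FileNotFoundError, ValueError):
            return False  # no sysfs entry or unreadable: assume no TRIM

    print("TRIM supported:", supports_trim("sda"))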

Fortunately, several of the most popular brands make their own controllers, and SandForce controllers are not as popular as they used to be. The SandForce issues make me skeptical about that specific "smart" controller design, which was very aggressive for its time. Apologies to SandForce, but I don't have a reference for the exact controller models affected.


[*] Filesystems like having plenty of free space too, to reduce fragmentation. So TRIM is great: you don't have to add two safety margins together, because the same free space helps both :). The drive can take advantage of the unallocated space to improve performance, as well as to avoid high wear, as you say.


Modern SSD controllers are smart enough that overprovisioning is not typically necessary for everyday use. However, there are still situations, primarily in datacenter environments, where overprovisioning is recommended. To understand why overprovisioning can be useful, you first need to understand how SSDs work.

SSDs must cope with the limitations of flash memory when writing data

SSDs use a type of memory called NAND flash. Unlike a hard drive's sectors, NAND cells containing data cannot be directly overwritten; the drive must erase existing data before it can write new data. Furthermore, while SSDs write data in pages that are typically 4 KB to 16 KB in size, they can only erase data in large groups of pages called blocks, which range from several hundred KB to several MB in modern SSDs.
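
To make that mismatch concrete, here's a quick back-of-the-envelope calculation in Python (the 4 KB page and 2 MB block are illustrative values within those ranges, not any specific drive's geometry):

    # Illustrative geometry only; page and block sizes vary by drive.
    PAGE_SIZE = 4 * 1024           # 4 KB write unit
    BLOCK_SIZE = 2 * 1024 * 1024   # 2 MB erase unit

    pages_per_block = BLOCK_SIZE // PAGE_SIZE
    print(pages_per_block)         # 512 pages are erased together

    # Overwriting a single 4 KB page "in place" would mean relocating
    # the other 511 pages before the block could be erased.
    print(pages_per_block - 1, "pages of collateral rewriting")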

NAND also has limited write endurance. To avoid rewriting data unnecessarily in order to erase blocks, and to ensure that no block receives a disproportionate number of writes, the drive tries to spread out writes, especially small random writes, across all of the underlying NAND. If the writes replace old data, it marks the old pages as invalid. Once all the pages in a block are marked invalid, the drive is free to erase it without having to rewrite valid data.

SSDs need internal free space to function optimally, but not every workload is conducive to maintaining free space

Because each block can contain varying amounts of invalid data waiting to be erased, the amount of free space that is internally available can be significantly less than what is logically available to the operating system.

A drive with little or no internal free space remaining may be forced to erase blocks immediately to allow new data to be written. Any valid data in those blocks must be rewritten into free pages in other blocks. If there aren't many free pages left, more blocks will need to be erased and rewritten, and so on. This process can occur several times for each individual write operation, especially if the drive is under a continuous stream of writes.

Such rewriting means that the total amount of data written to the NAND is greater than the amount actually sent to the drive. This phenomenon is called write amplification, and it can significantly degrade SSD performance and endurance. Write amplification is especially pronounced with random write-intensive workloads such as online transaction processing (OLTP). Applications that give the drive more idle time, including everyday consumer usage, allow for background data moves that require less rewriting (and thus lower write amplification) than would be necessary if the drive had to make space for new data right away.
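
As a rough illustration of the arithmetic (my own simplification, not any vendor's formula): if the emptiest blocks the drive can reclaim are still a fraction u valid, that fraction must be rewritten each time, so write amplification comes out to about 1 / (1 - u):

    # Rough model of write amplification under forced garbage collection
    # (a simplification for illustration, not a vendor formula). If
    # reclaimed blocks are still a fraction `valid` full of live data,
    # that data must be rewritten before erasing, so:
    #   WA = NAND writes / host writes ~= 1 / (1 - valid)
    def write_amplification(valid):
        return 1.0 / (1.0 - valid)

    for valid in (0.0, 0.5, 0.8, 0.9):
        print(f"blocks {valid:.0%} valid -> WA ~ {write_amplification(valid):.1f}x")
    # Plenty of internal free space (0% valid) gives the ideal 1.0x;
    # a nearly full drive (90% valid) rewrites ~10x what the host sent.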

To reduce write amplification, most modern systems support a command called TRIM, which tells the drive which sectors no longer contain valid data and can safely be erased. This is necessary because the drive would otherwise need to assume that data logically deleted by the operating system is still valid, hindering its ability to maintain adequate internal free space.
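
A toy flash-translation-layer sketch shows why the drive needs to be told (the data structures here are entirely hypothetical; real FTLs are far more sophisticated):

    # Toy flash translation layer (hypothetical; real FTLs are far more
    # complex). The point: without TRIM, a sector the OS has deleted
    # still looks live to the drive and gets copied during GC.
    class ToyFTL:
        def __init__(self):
            self.mapping = {}    # logical sector -> physical page
            self.valid = set()   # physical pages holding live data
            self.next_page = 0

        def write(self, sector):
            old = self.mapping.get(sector)
            if old is not None:
                self.valid.discard(old)      # old copy becomes garbage
            self.mapping[sector] = self.next_page
            self.valid.add(self.next_page)
            self.next_page += 1

        def trim(self, sector):
            # OS says this sector's data is dead: drop it with no write.
            page = self.mapping.pop(sector, None)
            if page is not None:
                self.valid.discard(page)

    ftl = ToyFTL()
    ftl.write(0)
    ftl.write(1)
    ftl.trim(1)          # file deleted and trimmed
    print(ftl.valid)     # {0} -- only truly live data is left for GC to copy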

But TRIM cannot completely eliminate write amplification. With wear leveling, random writes are scattered across all of the underlying NAND, so some amount of rewriting is inevitable even if the drive is logically nowhere near full. Furthermore, older operating systems and certain external enclosures may not support the TRIM command. In rare cases, a drive may appear to accept TRIM commands but fail to actually free space internally, rendering the command useless.

Modern SSDs experience significantly less write amplification than older drives, but some workloads can still benefit from overprovisioning

The earliest SSDs had immature firmware that tended to rewrite data much more than necessary. Early Indilinx and JMicron controllers (the JMF602 was infamous for stuttering and abysmal random write performance) suffered from extremely high write amplification under intensive random-write workloads, potentially in excess of 100x. (Imagine writing over 100 MB of data to the NAND when you're just trying to write 1 MB!) Newer controllers, with the benefit of higher processing power, improved flash management algorithms, and TRIM support, handle these situations much better, although heavy random-write workloads can still cause write amplification in excess of 10x on modern SSDs.

Overprovisioning guarantees that some amount of free space is always available to absorb random writes and minimize forced rewriting of data. All SSDs are overprovisioned to at least a minimal degree. Some use only the difference between GB and GiB, leaving about 7% of the raw capacity as internal free space. By contrast, drives intended for special applications may have significantly more overprovisioning. For example, an enterprise SSD for write-heavy OLTP or database workloads may have 512 GiB of physical NAND yet an advertised capacity of 400 GB, rather than the 480 to 512 GB typical of consumer SSDs with similar amounts of NAND.
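
The ~7% figure is just the GB/GiB unit gap, which is easy to verify (standard unit definitions, nothing drive-specific):

    # The "free" ~7% overprovisioning from binary vs. decimal units:
    # NAND comes in powers of two (GiB), capacity is advertised in GB.
    raw_nand = 512 * 1024**3      # 512 GiB of physical NAND, in bytes
    advertised = 512 * 1000**3    # 512 GB advertised, in bytes

    spare = raw_nand - advertised
    print(f"spare: {spare / 1000**3:.1f} GB "
          f"({spare / advertised:.1%} of advertised)")   # ~37.8 GB, ~7.4%

    # The enterprise example: same 512 GiB of NAND, advertised as 400 GB.
    enterprise = 400 * 1000**3
    print(f"enterprise OP: {(raw_nand - enterprise) / enterprise:.1%}")  # ~37%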

If your workload is particularly demanding, or if you're using the drive in an environment where TRIM is not supported, you can manually overprovision space by partitioning the drive so that some space is unused. For example, you can partition a 512 GB SSD to 400 GB and leave the remaining space unallocated, and the drive will use the unallocated space as spare space. Do note, however, that this unallocated space must be trimmed if it was previously written to; otherwise, it will have no benefit as the drive will see that space as occupied. (Partitioning utilities should be smart enough to do this, but I'm not 100% sure; see "Does Windows trim unpartitioned (unformatted) space on an SSD?")
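
For what it's worth, the spare fraction in that example works out like this (decimal GB assumed throughout):

    # Extra spare space from short-partitioning, per the example above
    # (decimal GB assumed throughout).
    drive_gb = 512
    partition_gb = 400

    unallocated = drive_gb - partition_gb
    print(f"{unallocated} GB unallocated "
          f"({unallocated / drive_gb:.0%} of the drive held in reserve)")
    # -> 112 GB, about 22%, on top of the factory spare area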

If you're just a normal consumer, overprovisioning is generally not necessary

In a typical consumer environment (TRIM supported, the SSD less than 70-80% full, and no continuous barrage of random writes), write amplification is rarely an issue and overprovisioning is generally not necessary.

Ultimately, most consumers will not write nearly enough data to disk to wear out the NAND within the intended service life of most SSDs, even with high write amplification, so it's not something to lose sleep over.