Restoring performance and estimating life of a used SSD drive?
On Linux, simply run

hdparm --trim-sector-ranges start:count /dev/sda

passing the block range(s) you want to TRIM in place of start and count, and the SSD device in place of /dev/sda.
It has the advantage of being fast and not writing zeros on the drive. Rather, it simply sends TRIM commands to the SSD controller letting it know that you don't care about the data in those blocks and it can freely assume they are unused in its garbage collection algorithm.
You probably need to run this command as root. Since this command is extremely dangerous, as it can immediately cause major data loss, you also need to pass the --please-destroy-my-drive argument to hdparm (I haven't added it to the command line above, precisely to prevent accidental data loss caused by copy and paste).
In the above command line, /dev/sda should be replaced with the SSD device you want to send TRIM commands to. start is the address of the first block (sector) to TRIM, and count is the number of blocks to mark as free starting from that address. You can pass multiple ranges to the command.
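For example, a hypothetical invocation might look like the following (the device name and the sector ranges here are made up purely for illustration; replace them with ranges on your own disk whose contents you genuinely want discarded):

hdparm --please-destroy-my-drive --trim-sector-ranges 0:2000 1000000:1000 /dev/sdX

This would tell the drive behind /dev/sdX that sectors 0 through 1999 and 1000000 through 1000999 no longer hold data it needs to preserve.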
Having personally done this with hdparm v9.32 on Ubuntu 11.04, on my laptop with a 128 GB Crucial RealSSD C300, I have to point out one issue: I was not able to pass the total number of disk blocks (0:250069680) as a single range. I manually (essentially "binary searching" by hand) found a large block count that still worked (40000) and was able to issue TRIM commands over a sequence of 40000-sector ranges to free up the entire disk. It can be done with a simple shell script like this (tested on Ubuntu 11.04 as root):
# fdisk -lu /dev/sda
Disk /dev/sda: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
...
To erase the entire drive, take the total number of sectors reported there, replace 250069680 in the following line with that number, and run (remembering to add --please-destroy-my-drive):
# i=0; while [ $i -lt 250069680 ]; do echo $i:40000; i=$((i+40000)); done \
| hdparm --trim-sector-ranges-stdin /dev/sda
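As a small variation (an untested sketch, assuming the blockdev utility is available, as it is on stock Ubuntu), you can let the kernel report the sector count instead of transcribing it from fdisk; blockdev --getsz prints the device size in 512-byte sectors, which is the same number fdisk shows:

# total=$(blockdev --getsz /dev/sda)
# i=0; while [ $i -lt $total ]; do echo $i:40000; i=$((i+40000)); done \
| hdparm --trim-sector-ranges-stdin /dev/sda

As before, the --please-destroy-my-drive argument is deliberately left out; add it yourself once you are sure about the device name.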
And you're done! You can try reading the raw contents of the disk with hexedit /dev/sda before and after, and verify that the drive has discarded the data.
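If you don't have hexedit handy, a quick spot check with dd piped into hexdump does the same job (the offset below is arbitrary; pick any sector you know held data before the TRIM):

# dd if=/dev/sda bs=512 skip=2048 count=8 2>/dev/null | hexdump -C | head

Many drives read back all zeros for trimmed sectors, though exactly what a trimmed sector returns is up to the drive.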
Of course, even if you don't want to use Linux as the primary OS of the machine, you can leverage this trick by booting off a live CD and running it on the drive.
First off, let's understand just what it is that causes the performance degradation. Without knowing this, many people will suggest inadequate solutions (as I already see happening). The crux of this entire predicament comes down to the following fact, cited from Wikipedia. Remember it, it's important:
With NAND flash memory, read and programming operations must be performed page-at-a-time while unlocking and erasing must happen in block-wise fashion.
SSDs are made up of NAND flash, and flash consists of "blocks". Each block contains many "pages". For the sake of simplicity, let's imagine we just purchased a shiny new SSD that contains a whopping single block of memory, and that block consists of 4 empty pages.
For the sake of clarity, I differentiate between empty pages, used pages, and deleted pages with ∅, 1, and X. The key point is that there is a difference between each of these from the controller's perspective! It is not as simple as 1's and 0's. So, to start, the pages on our fresh drive look like so:
∅, ∅, ∅, ∅ (all empty)
Now, we go to write some data to the drive, and it ends up getting stored in that first page, thus:
1, ∅, ∅, ∅
Next, we write a bit more data, only this time enough that it requires two pages and so it ends up being stored in the 2nd and 3rd page:
1, 1, 1, ∅
We are running out of space! We decide we don't really need the initial data we wrote, so let's delete it to make room:
X, 1, 1, ∅
Finally, we have another large set of data that we need to store which will consume the remaining two pages. THIS IS WHERE THE PERFORMANCE HIT OCCURS IN DRIVES WITHOUT TRIM!! Going from our last state to this:
1, 1, 1, 1
...requires more work than most people realize. Again, this is due to the fact that flash can only erase in a block-wise fashion, not page-wise, which is precisely what the final transition above calls for. The differentiator between TRIM and non-TRIM SSDs is when the following work is performed!
Since we need to make use of both an empty page and a deleted page, the SSD first has to read the contents of the entire block into some external storage/memory, erase the original block, modify the contents, and then write those contents back into the block. It's not a simple "write" anymore; it has become a "read-erase-write". This is a big change, and the middle of a large write is probably the most inopportune time for it to happen. It could all be avoided if that "deleted" page were recovered ahead of time, which is precisely what TRIM is intended to do. With TRIM, the SSD recovers our deleted pages either immediately after the delete or at some other opportune time that its TRIM algorithms deem appropriate. The important part is that with TRIM it doesn't happen while we are in the middle of a write!
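If it helps to see the bookkeeping spelled out, here is a toy shell sketch of the four-page block above (it has nothing to do with real hardware or hdparm; the state names and functions are invented purely to model the read-erase-write cycle):

#!/bin/bash
# Toy model of the single 4-page block from the example above.
# Page states: EMPTY (erased, writable), USED (live data), DELETED (stale, not yet erased).
pages=(EMPTY EMPTY EMPTY EMPTY)
show() { echo "block: ${pages[*]}"; }

write_page()  { pages[$1]=USED; }      # programming an EMPTY page is cheap
delete_page() { pages[$1]=DELETED; }   # "deleting" only marks the page stale

# Reusing a DELETED page forces the read-erase-write cycle:
reclaim_block() {
  live=()
  for p in "${pages[@]}"; do           # 1. read the live pages out of the block
    if [ "$p" = USED ]; then live+=(USED); fi
  done
  pages=(EMPTY EMPTY EMPTY EMPTY)      # 2. erase the whole block
  for i in "${!live[@]}"; do           # 3. write the live data back
    pages[$i]=USED
  done
}

write_page 0; write_page 1; write_page 2; show   # 1, 1, 1, empty
delete_page 0; show                              # X, 1, 1, empty
reclaim_block; show                              # whole block rewritten just to recover one stale page

The reclaim step is the expensive part: every live page has to be shuffled out and written back just to recover the one stale page.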
Without TRIM, we ultimately can't avoid the above scenario as we fill up our drives with data. Thankfully, some newer SSDs go beyond TRIM and effectively do the same thing in the background at the hardware level, without needing the ATA TRIM command (some call this garbage collection). But for those of us unlucky enough to have neither, it is important to know that writing zeros to the entire drive is not sufficient for reclaiming the original performance! Writing all zeros does not indicate to the controller that a page in flash is free for writing. The only way to do that on a drive which does not support TRIM is to invoke the ATA secure erase command, using a tool such as HDDErase (via Wayback Machine).
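To find out whether your drive advertises TRIM in the first place, hdparm can query the device's identification data:

# hdparm -I /dev/sda | grep -i trim

On a drive that supports it, you should see a line along the lines of "Data Set Management TRIM supported"; no output means the drive doesn't advertise TRIM.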
I believe there were some early SSDs that only supported TRIM upon deleting partitions or upon such things as Windows 7's "diskpart clean all", and not upon the deletion of individual files. This may be why an older drive appeared to regain performance after executing that command. This seems a bit hazy to me, though...
Much of my knowledge of SSDs and hardware/gadgets in general comes from anandtech.com. I thought he had a great write-up explaining all of this, but for the life of me I cannot find it!