Which Linux filesystem works best with an SSD?
Solution 1:
Short answer
- Choose ext4, and use FITRIM (see below). Also use the noatime mount option if you fear "SSD wear".
- Don't change the default I/O scheduler (CFQ) on multi-application servers, as it provides fairness between processes and has automatic SSD support. However, use Deadline on desktops to get better responsiveness under load.
- To easily guarantee proper data alignment, the starting sector of each partition must be a multiple of 2048 (= 1 MiB). You can use fdisk -cu /dev/sdX to create them; recent distributions take care of this automatically. (A quick way to verify alignment is sketched just after this list.)
- Think twice before using swap on an SSD. It will probably be much faster than swap on an HDD, but it will also wear the disk faster (which may not be relevant, see below).
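For instance, here is a minimal alignment check (assuming the disk is /dev/sda; adjust the device name to your system). It reads each partition's starting sector from sysfs and tests whether it is a multiple of 2048:
# Check that every partition on /dev/sda starts on a 1 MiB boundary
# (start sectors are in 512-byte units, so 2048 sectors = 1 MiB)
for part in /sys/block/sda/sda*/start; do
    start=$(cat "$part")
    if [ $((start % 2048)) -eq 0 ]; then
        echo "$part: $start (aligned)"
    else
        echo "$part: $start (NOT aligned)"
    fi
done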
Long answer
- Filesystems:
Ext4 is the most common Linux filesystem (and well maintained). It provides good performance with SSDs and supports the TRIM (and FITRIM) feature to keep good SSD performance over time (this clears unused memory blocks for quick later write access). NILFS is especially designed for flash memory drives, but it does not really perform better than ext4 in benchmarks. Btrfs is still considered experimental (and does not really perform better either).
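As a concrete starting point, here is a minimal sketch of setting up ext4 with noatime (the partition name /dev/sda1 and mount point /mnt are examples; mkfs.ext4 erases the partition's contents):
# Create an ext4 filesystem on the partition (assumes /dev/sda1 -- destructive!)
mkfs.ext4 /dev/sda1

# Mount it with noatime to avoid an extra write on every file read
mount -o noatime /dev/sda1 /mnt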
- SSD performance & TRIM:
The TRIM feature clears SSD blocks that are no longer used by the filesystem. This optimizes long-term write performance and is recommended on SSDs due to their design. It requires that the filesystem be able to tell the drive about those blocks. The discard mount option of ext4 issues such TRIM commands when filesystem blocks are freed. This is online discard.
However, this behavior implies a small performance overhead. Since Linux 2.6.37, you may avoid using discard and choose to do occasional batch discard with FITRIM instead (e.g. from the crontab). The fstrim utility does this (online), as does the -E discard option of fsck.ext4. You will need "recent" (as of writing) versions of these tools, however.
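For example, batch discard can be run manually or scheduled; a sketch (the mount point / and the weekly schedule are assumptions, pick your own):
# Trim unused blocks on the filesystem mounted at / and report the amount trimmed
fstrim -v /

# Example root crontab entry (edit with 'crontab -e' as root):
# run fstrim on / every Sunday at 04:00
0 4 * * 0 /sbin/fstrim /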
- SSD wear:
You might want to limit writes on your drive, as SSDs have a limited lifetime in this regard. Don't worry too much, however: even a low-end 128 GB SSD of today can sustain at least 20 GB of written data per day for more than 5 years (at 1000 write cycles per cell). Better (and bigger) ones can last much longer: you will very probably have replaced it by then.
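If you want to keep an eye on actual wear, smartmontools can help; a sketch (assuming the drive is /dev/sda; the reported attribute names vary by vendor):
# Print SMART attributes; look for wear-related entries such as
# Wear_Leveling_Count or Total_LBAs_Written (vendor-specific names)
smartctl -A /dev/sda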
If you want to use swap on an SSD, the kernel will notice a non-rotational disk and will randomize swap usage (kernel-level wear levelling): you will then see SS (Solid State) in the kernel message when swap is enabled:
Adding 2097148k swap on /dev/sda1. Priority:-1 extents:1 across:2097148k SS
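To check this on a running system (assuming swap is already enabled):
# List active swap areas
swapon -s

# Look for the trailing 'SS' marker in the kernel log
dmesg | grep -i 'adding.*swap'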
- I/O Schedulers:
I agree with most of aliasgar's answer (even if most of it seems to have been copied, perhaps illegally, from this website), but I must partly disagree on the scheduler part. By default, the Deadline scheduler is optimized for rotational disks, as it implements the elevator algorithm. So, let's clarify this part.
Long answer on schedulers
Starting from kernel 2.6.29, SSDs are automatically detected. You can verify this with:
cat /sys/block/sda/queue/rotational
You should get 1 for hard disks and 0 for an SSD.
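To check every block device at once, a small sketch:
# Print the rotational flag of each block device (1 = rotational disk, 0 = SSD)
for dev in /sys/block/*/queue/rotational; do
    echo "$dev: $(cat "$dev")"
done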
Now, the CFQ scheduler can adapt its behavior based on this information. Since Linux 3.1, the kernel documentation file cfq-iosched.txt says:
CFQ has some optimizations for SSDs and if it detects a non-rotational media which can support higher queue depth (multiple requests at in flight at a time), [...].
Also, the Deadline scheduler tries to limit unordered head movements on rotational disks, based on the sector number. Quoting the kernel doc deadline-iosched.txt, fifo_batch option description:
Requests are grouped into ``batches'' of a particular data direction (read or write) which are serviced in increasing sector order.
However, tuning this parameter to 1 when using an SSD may be interesting:
This parameter tunes the balance between per-request latency and aggregate throughput. When low latency is the primary concern, smaller is better (where a value of 1 yields first-come first-served behaviour). Increasing fifo_batch generally improves throughput, at the cost of latency variation.
Some benchmarks suggest that there is little difference in performance between the different schedulers. So why not recommend fairness, since CFQ is rarely bad in these benchmarks?
However, on desktop setups, you will usually experience better responsiveness using Deadline under load, due to its design (probably at some throughput cost, though). That said, a better benchmark would also try Deadline with fifo_batch=1.
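For reference, here is a sketch of trying this at runtime (as root; assumes the device is /dev/sda and that your kernel offers the deadline scheduler; these settings do not survive a reboot):
# Switch sda to the deadline scheduler
echo deadline > /sys/block/sda/queue/scheduler

# Favor latency over throughput (1 = first-come, first-served behaviour)
echo 1 > /sys/block/sda/queue/iosched/fifo_batch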
To use Deadline on SSDs (and NVMe & flash memory drives) by default, you can create a udev rules file, say /etc/udev/rules.d/99-ssd.rules, as follows:
# all non-rotational block devices use 'deadline' scheduler
# mostly useful for SSDs on desktop systems
SUBSYSTEM=="block", ATTR{queue/rotational}=="0", ACTION=="add|change", ENV{DEVTYPE}=="disk", ATTR{queue/scheduler}="deadline"
Solution 2:
Filesystem EXT4 + TRIM:
- EXT4 with TRIM improves performance by reducing unnecessary write cycles on the SSD, since its cells support only a limited number of write-rewrite cycles.
- Ubuntu and some other Linux distributions support EXT4 with TRIM out of the box.
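To confirm that your drive actually supports TRIM before relying on it, a quick check (assuming /dev/sda):
# Non-zero DISC-GRAN and DISC-MAX values indicate discard (TRIM) support
lsblk --discard /dev/sda

# Alternative check with hdparm
hdparm -I /dev/sda | grep -i trim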
SWAP Partition:
- Make sure you do not have swap space on the SSD, again to reduce write cycles.
- If you have a mechanical drive, create the swap space on the mechanical drive instead of on the SSD.
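A sketch of moving swap off the SSD (the device names /dev/sda2 and /dev/sdb2 are examples; mkswap erases the target partition, and /etc/fstab must be updated to make the change permanent):
# Disable the swap area on the SSD (example name)
swapoff /dev/sda2

# Initialize and enable swap on the mechanical drive (example name)
mkswap /dev/sdb2
swapon /dev/sdb2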
Partition Alignment:
- Each partition should start on a clean 1 MiB boundary, so that the filesystem's block size aligns with the block size of the SSD.
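You can verify the alignment with parted (assuming /dev/sda and partition number 1):
# Check whether partition 1 of /dev/sda is optimally aligned
parted /dev/sda align-check optimal 1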
So: use EXT4 + TRIM, with swap on a mechanical hard drive, or no swap at all on the SSD.
The above can be implemented by referring to the source: How to Maximize SSD Performance.