Should I use btrfs or Ext4 for my SSD?

Should I use btrfs (with the discard, compress=lzo and space_cache mount options) or Ext4 (with the discard option) on the SSD holding the root partition of my Ubuntu 11.10 (Oneiric) amd64 office desktop?

/home will be on an HDD, so filesystem reliability affects the OS, not my data.
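
For reference, here is roughly what the two candidate setups would look like as /etc/fstab entries (a sketch only; the UUID placeholders and the exact field values are illustrative, not from a real machine):

# Option A: btrfs root with the options from the question
UUID=<uuid-of-root>  /  btrfs  discard,compress=lzo,space_cache  0  0
# Option B: ext4 root with online TRIM via discard
UUID=<uuid-of-root>  /  ext4   discard,errors=remount-ro  0  1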


According to the tests by Phoronix, it always depends on many factors. In some cases Btrfs does much better than Ext4, for example when reading large files on an SSD; in others, such as disk transaction performance, Ext4 performs better than Btrfs.

You can have a look through these tests here, here and here (WARNING: Lengthy articles).

But summing it all up, Btrfs does not currently have a clear performance advantage over the Ext4 file-system, even when used in SSD mode.

So you can just go with Ext4 for now.


For those stumbling on this question in 2016... use ext4. I tried btrfs and the difference is substantial. Over a 10-day period, write IOs to ext4 amounted to 17,800 sectors. Btrfs? 490,400 sectors. Same SSD, identical contents, different partitions; basically the same workload.
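
Those sector counts come from the kernel's per-disk counters. A quick way to sample the same number the script further down uses (assuming the SSD shows up as sda) is:

# Column 8 of "vmstat -d" is total sectors written for that device
vmstat -d | awk '$1 == "sda" {print $8}'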

Both ext4 and btrfs go "quiet" when there is zero write activity on the drive. That's good.

Ext4 will write the modified data, plus some overhead, and the overhead scales with the data written. A 4K write (1 block) pushes about 50-80 blocks of overhead at the next commit (with the ext4 journal fully enabled).

Modify a single 4K block on btrfs and you'll push between 4000-5000 blocks of overhead at the next commit. The default commit interval is 30 seconds, I believe; I used 120.
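
The commit interval is just a mount option, so a sketch of stretching it to 120 seconds (commit= works for both ext4 and btrfs on reasonably recent kernels) looks like:

# Apply on the fly to the running root filesystem
sudo mount -o remount,commit=120 /
# Or persist it by adding commit=120 to the option list in /etc/fstab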

Now, it depends on how you use the SSD. On a root filesystem, there is typically a fairly constant, low-level stream of writes going on: log files, ntp drift files, man-db rebuilds, opensm topology updates, etc, etc. Each event will hammer a btrfs drive with another 4000-5000 writes.

The 10-day numbers above are for my "write limited" SSD. The bulk of those 17,800 sectors were the result of a smallish system update, one the btrfs copy did not undergo. My writers are, exactly: ntp drift, opensm topology, and man-db updates (nightly). Nothing else hits that disk, except actively initiated things like system upgrades, vim /etc/whatever, etc.

On the whole, SSDs will suffer a lot of writes, really. I just can't see the point in wasting them just 'cuz the news media is chasing bunnies and rainbows. If you want to pay this price for COW, go for it. For "performance", not so much. It's an SSD: you could probably put the worst "file system" known to man on it and still get some level of performance, just by brute force. Ext4 is, by far, not the worst file system known to man.

No monthly fs check. Try the script below. It's a 100% hack and won't work for md mountpoints.

#!/bin/bash
# Track writes hitting the filesystem mounted at $1 (e.g. "/").
# Reports the kernel's sector counter, the drive's own SMART LBA
# counter, and the files touched since the last sample.
# 100% hack: needs root (for smartctl), won't work for md mountpoints.

# Block device backing the mountpoint, e.g. /dev/sda1
dev=$(grep " $1 " /proc/mounts | awk '{print $1}')
# Parent disk name (e.g. sda), which is what vmstat -d reports on
vmnam=$(lsblk "$dev" -o MOUNTPOINT,PKNAME | grep "$1" | awk '{print $2}')
# Baselines: sectors written (vmstat -d column 8) and SMART LBA raw value
vmx=$(vmstat -d | grep "$vmnam" | awk '{print $8}')
lbax=$(smartctl -a "$dev" | grep LBA | awk '{print $10}')
# Timestamp file used to find files modified between samples
tmpnam=$(mktemp XXX)
echo "Tracking device: $dev, mounted on $1 (vmstat on $vmnam)"
tim=$(date +%s)
timx=$tim
while true
do
    vm=$(vmstat -d | grep "$vmnam" | awk '{print $8}')
    lba=$(smartctl -a "$dev" | grep LBA | awk '{print $10}')
    if [ "$vm" != "$vmx" ]
    then
        tim=$(date +%s)
        dif=$((vm - vmx))
        lbad=$((lba - lbax))
        timd=$((tim - timx))
        echo "$(date) (sec=$timd) writes=$vm (dif=$dif) (lba=$lbad)"
        vmx=$vm
        lbax=$lba
        timx=$tim
        # Show what was modified since the last sample (skip /tmp noise)
        find "$1" -mount -newer "$tmpnam" -print | grep -v "/tmp"
        touch "$tmpnam"
    fi
    sleep 1
done

It will tell you how many blocks were written, according to the drive itself, and exactly which files were updated. It needs root privileges. See for yourself. I run an SSD as the root filesystem and call the script stat.sh, so: sudo ./stat.sh /


The last time I tested it, and I haven't heard differently yet anywhere, ext4 eats solid-state media (thumbdrives, solid-state drives, etc.), so I don't recommend using it on such a device. Use ext3 instead. For most use cases on an SSD you won't be able to tell the difference anyway.

BTRFS is not yet quite stable. However, it is stable enough for non-critical applications; it is what I use for making bootable flash drives. If you use compress=zlib and ssd as your mount options, the compression will make up for the lower write speeds of most solid-state media, and the ssd option switches to an allocation algorithm that performs significantly better on such devices and will make up for any poor wear-levelling by the hardware. The one performance area that's still an issue is that sync calls are slow. This is not a problem for general use, but dpkg calls sync after every operation, so installing and updating software can be slow. BTRFS also offers snapshotting and other advanced features that are quite useful under certain circumstances.
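
A minimal sketch of that flash-drive setup (the device /dev/sdb1 and the label are placeholders for your own stick):

# Format the stick as btrfs
sudo mkfs.btrfs -L bootstick /dev/sdb1
# Mount with compression plus the SSD allocation behaviour
sudo mount -o compress=zlib,ssd /dev/sdb1 /mnt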

If you decide to go with BTRFS, be sure to use a distro with kernel 3.2.0-2 or later; 3.1.x is workable if necessary. For older kernels you'll need to compile the latest BTRFS modules yourself. The built-in ones are almost stable, but error correction doesn't work in the older versions, which can leave you up a creek if something goes wrong. The latest versions have an fsck that can actually repair the most common faults.
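
In newer btrfs-progs that repair tool is the btrfs check subcommand (older releases shipped it as btrfsck); run it against an unmounted filesystem, and treat --repair as a last resort:

# Read-only consistency check first (device name is a placeholder)
sudo btrfs check /dev/sdb1
# Only attempt a repair if the check reports fixable damage
sudo btrfs check --repair /dev/sdb1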

One final caveat: I have heard reports that swapfiles on a BTRFS filesystem will corrupt it. This issue may well have been fixed, but be sure to check carefully before implementing one.

If you need any help getting a BTRFS setup configured the way you want, let me know. I've done a couple of crazy ones that work rather nicely for specific things.


I would not use ext4 on a solid-state drive, based on anecdotal evidence and my own experience suggesting that ext4 can greatly diminish the lifetime of an SSD due to the number of reads and writes associated with the file system. One article I recently read suggested that unoptimized ext4 (not accounting for page size, etc.) on an SSD can cut the disk's life in half. After a week of troubleshooting, I've come to the conclusion that my own SSDs only lasted eight months due to this issue. If you use an SSD, do lots of reading on how to optimize the file system for things like flash page size, which may differ from the typical cylinder size the file system is set up for.
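
As a starting point, a few commands for seeing what the drive reports and whether a partition is aligned (assuming the SSD is /dev/sda; note that drives rarely report their true flash page size, so the vendor datasheet is still your friend):

# Block sizes the drive advertises to the kernel
cat /sys/block/sda/queue/physical_block_size
cat /sys/block/sda/queue/logical_block_size
# Check that partition 1 starts on an optimal I/O boundary
sudo parted /dev/sda align-check optimal 1
# Create ext4 with an explicit 4K block size to match a typical flash page
sudo mkfs.ext4 -b 4096 /dev/sda1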