Choice of filesystem for GNU/Linux on an SD card

Excellent article about flash filesystems.

An important question when talking about flash filesystems is: what is wear leveling? (See the Wikipedia article.) Basically, each block on a flash device can only be written a limited number of times before it goes bad. After that, the filesystem must mark the block as invalid and avoid using it, unless the hardware has built-in wear-leveling management of its own, as SSDs usually do.

Typical filesystems (for example ReiserFS, NTFS, ext3 and so on) are designed for hard disks, which do not have such limitations.

JFFS2

Includes compression and elegant wear-leveling support.

YAFFS2

  • The single thing that sets it apart: short mount times after a successful unmount.
  • Implements a write-once property: once data has been written to a block, it never needs to be rewritten. This is important, as it reduces wear.

LogFS

  • Not very mature, but already included in the Linux kernel tree.
  • Supports larger filesystems than JFFS2/YAFFS2 without problems.

UBIFS

  • More mature than LogFS.
  • Write-caching support.
  • On scalability, see the article: on large disks it performs better than JFFS2.
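
For the curious, setting up UBIFS on a raw MTD flash device (not an SD card, which hides the flash behind a block-device controller) usually goes something like this; the device and volume names here are just examples:

    # erase/format the flash and attach MTD device 0 to UBI (mtd-utils package)
    ubiformat /dev/mtd0
    ubiattach /dev/ubi_ctrl -m 0
    # create a volume using all available space, then mount it
    ubimkvol /dev/ubi0 -N rootfs -m
    mount -t ubifs ubi0:rootfs /mnt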

ext4

If neither the driver nor the card handles wear leveling (SSD drives, for example, usually do have internal wear leveling), then ext4 is not the best idea, as it is not intended for raw flash usage.

Which one is the best?

Of course, it depends on usage and support. From what I have read on the Internet, I would recommend UBIFS: good support for large filesystems, a mature stage of development, adequate performance and no huge downsides.


I was facing the same problem and did some research as well. Eventually I decided to go with ext2.

It seems that some SDHC cards implement their own wear leveling at the hardware layer. If you can, get hold of SDHC cards that have wear leveling built in.

Filesystems that provide wear leveling can interfere with the flash-level wear leveling, so using them can actually be bad for the flash (the IBM article cited above talks about how JFFS does it, so it clearly won't work well with flash-level WL). I decided I didn't need ext3's journaling, since I'm not storing critical data on the card and I back up regularly anyway (via cron).
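
For what it's worth, the backup part is a single crontab line; the paths here are made up:

    # nightly copy of the card's contents to the main disk
    0 3 * * * rsync -a --delete /mnt/sdcard/ /home/me/sd-backup/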

I also mounted /tmp and /var as tmpfs to speed things up. If you have enough RAM, you should do the same (but be sure to rotate or delete your logs regularly).
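
The matching /etc/fstab lines look roughly like this (the size limits are just example values; remember that anything under these mount points is gone after a reboot):

    tmpfs   /tmp   tmpfs   defaults,noatime,size=100m   0 0
    tmpfs   /var   tmpfs   defaults,noatime,size=200m   0 0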

HINT: Mount your ext SD cards with the "noatime" option
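
For example (device name and mount point are placeholders):

    # one-off mount
    mount -o noatime /dev/mmcblk0p1 /mnt/sdcard

    # or permanently, via /etc/fstab
    /dev/mmcblk0p1   /mnt/sdcard   ext2   defaults,noatime   0 2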


I don't know if this fits your system's profile, but what about using a read-only filesystem plus a read-write partition (or a USB stick that can be replaced easily)? That way you'll have a fast disk for your OS and can replace your rw storage easily when it wears out.
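
As a rough sketch, the /etc/fstab for such a split could look like this (the device names and the choice of ext2 are just assumptions):

    # SD card: the OS, mounted read-only so nothing ever writes to it
    /dev/mmcblk0p1   /       ext2   ro,noatime   0 1
    # cheap, easily replaced USB stick for everything that changes
    /dev/sda1        /data   ext2   rw,noatime   0 2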

And then there's unionfs. As I understand it, it "stacks" different filesystems (e.g. a read-only FS on top of a read-write FS). On a read access, unionfs works through the stack until it hits the FS that contains the file being looked for. When writing, unionfs picks the first writable FS on the stack and uses it.
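
If you want to play with that idea today, the in-kernel overlayfs does the same kind of stacking; a minimal sketch with made-up paths (/ro is the read-only base, /rw must be on a writable filesystem):

    mkdir -p /rw/upper /rw/work /merged
    mount -t overlay overlay -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /merged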

I also found these articles that may be interesting: http://www.linux-mag.com/id/7357/ http://www.linux-mag.com/id/7345/

And two articles with tips for using SSDs: http://danweinreb.org/blog/using-solid-state-disks-on-linux http://www.zdnet.com/blog/perlow/geek-sheet-a-tweakers-guide-to-solid-state-drives-ssds-and-linux/9190


Selecting (and sizing) the correct file system is more important than anything else, not only for security but for tons of other reasons people do not usually recognize. Without a file system, all that processing would go to null.

Olli's response is very well put, and the OP is quite dated, but file systems are my pet peeve and I could not stay away. superuser.com is not a site I had visited before, and I am not an admin, but I signed up and I am going to visit more.

Things have changed a lot since 2011, but even back then I formatted SD cards as FAT and used USB drives to carry 4 GB+ files around. The reason, of course, was compatibility rather than security (so much for the S in SD, but I put passwords on my 7z archives), and I never really carried anything bigger than a CD ISO; they were mostly SQL scripts and daily or hourly diffs of already-encrypted database snapshots squeezed to near death by 7-Zip.

These days I wear out SD cards faster than anyone I know. I have a USB stick in some production machines at my employer for hourly automated backups, formatted FAT. I keep an eye on them every day, though, and (you guessed it) back them up religiously by hand (they are in an off-line, secured building doing ITAR work). SSDs have leveled some of the playing field, but I still do not trust them as much as a regular HD, and SD is worse than optical: they go bad in an instant and the loss is total.

Any file system that invites the host OS to write to it at random (NTFS, Recycle Bin) is bad news for an SD card. Also, unmounting helps a lot: no OS is going to try accessing unmounted storage, so any file system will do as long as the SD card includes a script to unmount itself (one of the standard files on every SD card of mine).
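
Such a script does not need to be fancy; something along these lines would do (the mount point is just an example):

    #!/bin/sh
    # flush pending writes, then unmount so nothing touches the card afterwards
    sync
    umount /media/sdcard && echo "Safe to pull the card."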

Reading an SD card is still slow today, so I would recommend something like disk dump (dd) to grab the entire image when mirroring, instead of copying file by file. dd also lets you know when something is wrong, so your file manager won't go kaboom.
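
For example (triple-check the device name first; this reads free space too, so the image is as big as the card):

    # /dev/sdX and the image name are placeholders
    dd if=/dev/sdX of=sd-backup.img bs=4M conv=fsync status=progress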

Of course, if your primary purpose is to extend the life of some penny-stock card, you are going about your business the wrong way. I do what I do not to extend the life of an SD card but to keep it from going bad when I am not watching, and that is the difference.

I avoid ext4 or any journaling FS on SD cards because I do not care if one goes bad while I am writing to it, but it sure hurts when a day or so later I cannot read it!