Which UNIX filesystem do you use and recommend for servers?

For Linux, I use ext3 on top of LVM. Using LVM makes it easy to extend a partition later if I need more space. There are more choices, but my needs have never been demanding enough to warrant performance-testing the alternatives for my circumstances.

Part of my reasoning for sticking with ext3 is that -- as the default filesystem for many varieties of Linux -- it will be one of the most thoroughly tested in a variety of different situations.

Someone with special needs -- a high-performance server, for example, or a filesystem that needs to hold an unusually large number of files, or mostly very tiny files, or mostly huge files -- should try a couple of different filesystems to see which serves their needs best.


If you use a UPS and are confident that there will be no sudden loss of power or other conditions forcing the computer into a hard shutdown, I would recommend XFS in many situations. It is fast for most uses, though it has some weaknesses when handling many small files at once. However, it tends to lose data in the event of an uncontrolled shutdown. This filesystem is available on Linux and IRIX.

Ext3 is the most 'stable' choice, having been in Linux for many years and having had virtually no bugs for an extended period of time. It does suffer some performance and space-efficiency penalties, mainly due to being block- rather than extent-based. This filesystem is available in Linux.

ReiserFS (3) is what I personally use, as it is quite stable in the event of uncontrolled shutdowns (which my laptop sees a lot of), as well as being space-efficient and fast. However, if it does run into issues, the contents of multiple files may end up intermingled - a possible security problem. (XFS, by comparison, zeroes out corrupted files. This makes recovery harder, but is more secure). This filesystem is available in Linux.

I recommend avoiding Reiser4. While it is fast, it is unstable (and has become more so), partially due to being rejected from the official kernel and being maintained out-of-tree.

ZFS is the new kid on the block. It is performant and feature-rich, but is relatively untested. It does have many useful aspects, one of the largest being snapshotting. This can be used to take a snapshot of the file system, which remains consistent while, say, a backup program archives its data. This filesystem is available in Solaris and (to a degree) NetBSD.
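As a sketch of that snapshot-for-backup idea (the pool and dataset names here, tank/home, are made up):

```shell
# Take an atomic, read-only snapshot of the dataset (hypothetical names)
zfs snapshot tank/home@nightly
# A backup tool can now archive the frozen snapshot while the live
# filesystem keeps changing underneath it
zfs send tank/home@nightly > /backup/home-nightly.zfs
# Drop the snapshot once the backup has completed
zfs destroy tank/home@nightly
```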

Also, while it isn't a file system, for any Linux-based servers I would recommend layering any file systems on top of LVM, the logical volume manager. It makes disk administration much easier. EVMS (which uses LVM internally) is also an option, and is somewhat easier to use, but has been mostly unmaintained for some time now.
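To make the "easier administration" point concrete, here is a sketch of growing an ext3 filesystem that sits on LVM. The volume-group and volume names (vg0, data) are made up, and the commands need root privileges and free extents in the volume group, so treat this as illustration rather than something to paste in:

```shell
# Grow the logical volume by 10 GiB (vg0/data are hypothetical names)
lvextend -L +10G /dev/vg0/data
# Then grow the ext3 filesystem to fill the enlarged volume
resize2fs /dev/vg0/data
```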


On Solaris, if zfs is available, it is the clear winner.

On Linux, if xfs is not easily available (e.g. RedHat Enterprise / CentOS), ext3 is the clear winner.

On Linux, if xfs is available, it is the clear winner.

Reiserfs was never mature enough for serious use, and now never will be. The only filesystem under development that attempts to come close to its functionality is btrfs.


Really, I would go with ext3 for the moment; it's "fine" for nearly everything, and when fine isn't good enough you would probably want to do more detailed research centered around your specific need.

Anyway, in a nutshell:

  • ext3 = most supported, most used
  • ext4 = newer, (mostly) backwards compatible with ext3, but delays committing file data to disk in a way that is arguably less safe
  • xfs = oldest (iirc) still being used, very stable, very good, not widely used; also has a delayed-commit issue similar to ext4's
  • reiserfs = somewhat dead after... ahh, yeah. Bad jokes aside, was good for lots of small files but not for much else.
  • zfs = new, tasty, crazy stable, crazy resilient. IMO really only useful for large amounts of data where you really know what you're doing; has not-that-great Linux support (FUSE only).
  • jfs = actually, I know nothing about this.

The problem with ext4, xfs and others is how they commit data to the drive and when they "flush" (actually commit the data, as opposed to caching the write and committing it later). You can read up more about it, but the general gist is that they are arguably less safe with your data, but faster for it. This can all be configured, of course.
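An application can also force that flush itself rather than relying on the filesystem's timing. Here is a minimal Python sketch (the file path is arbitrary) of the write → flush → fsync sequence:

```python
import os
import tempfile

# Writing alone only hands the data to the OS page cache; fsync() asks
# the kernel to actually commit it to the drive, which is the "flush"
# step the filesystems above may delay for speed.
path = os.path.join(tempfile.gettempdir(), "durable.txt")
with open(path, "w") as f:
    f.write("important data\n")
    f.flush()              # push Python's buffer into the OS cache
    os.fsync(f.fileno())   # ask the kernel to write it to stable storage

with open(path) as f:
    print(f.read().strip())   # -> important data
```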


What is the planned use (and operating system)? For a boot drive or for file storage?

For example, if running a Mac, you'll need to use HFS+ or UFS for a bootable drive.

ZFS gives you a performance hit but increases data integrity, offers RAID-like features, and allows you to create a single volume over different-sized disks.
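For example, a single ZFS pool spanning two disks of different sizes might be created like this (the Solaris device names are hypothetical):

```shell
# Combine two different-sized disks into one pool (hypothetical devices)
zpool create tank c1t0d0 c1t1d0
# Carve a filesystem out of the pooled capacity
zfs create tank/data
# 'zpool list tank' would show the combined size of both disks
```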