Which is the best file system to run a web server and a database on Debian?

Solution 1:

Do your extX filesystems have dir_index enabled? (Run tune2fs -l /dev/XXX to check.) If not, try enabling it as a first step.
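
A minimal check-and-enable sketch, reusing the /dev/XXX placeholder from above (the e2fsck step must be run on an unmounted filesystem):

    # Check whether dir_index appears in the feature list
    tune2fs -l /dev/XXX | grep 'Filesystem features'

    # Enable hashed b-tree directory indexes
    tune2fs -O dir_index /dev/XXX

    # Rebuild indexes for directories that already exist (unmount first)
    e2fsck -fD /dev/XXX

Note that tune2fs -O dir_index only sets the feature flag; the e2fsck -fD pass is what actually builds hash indexes for pre-existing directories.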

XFS handles massive directories well.
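
If you want to try it on Debian, a sketch (assuming a spare, hypothetical partition /dev/sdc1 and mount point /srv/www):

    apt-get install xfsprogs
    mkfs.xfs /dev/sdc1
    mount /dev/sdc1 /srv/www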

Solution 2:

As James noted, ext{2,3} handles huge directories extremely well with the appropriate flags. But... it sometimes doesn't feel like that.

Specifically:

  • A modern filesystem can do a very fast and scalable mapping from name to inode, meaning it can (nearly) instantly open any given file no matter how big the directory it's in. Answering any query (existence, permissions, size, owner, etc.) about a specific path is likewise largely unaffected by directory size.

But...

  • Any operation that works on the directory as a whole has to iterate linearly over all the files in it, which can be really slow. For instance, ls sorts the filenames alphabetically by default, so it has to read them all, then sort, then display, easily taking several minutes on a directory with many thousands of files. Another common issue is wildcard matching, which also has to read every existing filename to return the matching subset. (See the sketch after this list.)
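
A quick way to feel the difference, assuming a hypothetical bigdir/ holding a few hundred thousand files:

    # Fast: an exact path is resolved through the directory index
    time stat bigdir/file123456

    # Slow: ls must read and sort every entry before printing anything
    time ls bigdir > /dev/null

    # Slow: the shell must read every entry to expand the wildcard
    time ls bigdir/file1234* > /dev/null

If you only need the raw listing, ls -U (or ls -f) skips the sort, but it still has to read every entry.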

Conclusion: if you only use precisely specified paths, any good filesystem will do. If you use wildcards, or frequently operate on the whole directory (listing, copying, or deleting it), any filesystem will be too slow on huge directories.