Serving static files with nginx over NFS?

By setting up a central NFS server you introduce a single point of failure into your design. That alone should be a deal breaker. If it isn't, NFS can be plenty fast enough for a load like this. The critical factors are having enough RAM to cache files, low-latency interconnects (Gig-E or better), and tuning (less important than the first two).
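On the tuning side, one concrete knob is nginx's open_file_cache, which can cut a lot of NFS metadata round trips by caching file descriptors and stat() results. A minimal sketch, with illustrative (not tuned) values:

    # nginx.conf excerpt -- values are illustrative, tune for your workload
    http {
        open_file_cache          max=10000 inactive=60s;  # cache up to 10k descriptors/stat results
        open_file_cache_valid    120s;  # revalidate a cached entry after 2 minutes
        open_file_cache_min_uses 2;     # only cache files requested at least twice
        open_file_cache_errors   on;    # also cache failed (404) lookups
    }

Cached entries can serve slightly stale metadata between revalidations, which is usually acceptable for static assets.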

You should also strongly consider using rsync or a similar tool to keep local copies of the static files updated on each individual web server. Another option is a SAN or a redundant NFS server solution, both of which will be far more complicated and costly than the rsync idea.
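A minimal sketch of the rsync approach, assuming hypothetical hostnames (web1..web3) and a /srv/static docroot, run from wherever content is published (a deploy hook or cron job):

    #!/bin/sh
    # Push the static docroot to each web server; hosts and paths are examples.
    for host in web1 web2 web3; do
        rsync -az --delete /srv/static/ "$host:/srv/static/"
    done

The --delete flag keeps each copy an exact mirror, so removals propagate too.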


I use cachefilesd (with a recent Linux kernel that has FS-Cache support) to cache NFS files to a local HD. Every read from the NFS mount copies the file into a /var/cache/fs directory, and subsequent reads are delivered from there, with the kernel checking against the NFS server whether the content is still valid.
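A rough sketch of that setup on Linux (server name and export path are placeholders; the cache directory shown is the cachefilesd default rather than the /var/cache/fs path above):

    # /etc/cachefilesd.conf (excerpt)
    dir /var/cache/fscache    # local disk directory that backs the cache
    tag mycache

    # start the daemon, then mount NFS with the fsc option to enable FS-Cache
    service cachefilesd start
    mount -t nfs -o ro,fsc nfsserver:/export/static /srv/static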

This way you can keep a central NFS server without losing the performance of local files.

Cachefilesd takes care of culling old files when free space/inodes fall below a configured level, so you can serve uncommon data from NFS and common requests from the local disk.
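Those thresholds live in /etc/cachefilesd.conf; a sketch showing the documented defaults (percentages of free blocks/inodes at which culling starts, intensifies, and caching stops):

    # /etc/cachefilesd.conf -- culling thresholds (these are the defaults)
    brun  10%   # stop culling when more than 10% of blocks are free
    bcull  7%   # start culling when free blocks drop below 7%
    bstop  3%   # refuse new cache entries below 3% free blocks
    frun  10%   # same three thresholds, but for free files (inodes)
    fcull  7%
    fstop  3%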

Of course, also put Varnish in front to cache the most common requests and save nginx/NFS from serving them.
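A minimal Varnish sketch along those lines, assuming nginx sits behind it on 127.0.0.1:8080; the one-hour TTL is an arbitrary example:

    vcl 4.0;

    backend default {
        .host = "127.0.0.1";   # nginx behind Varnish (assumed address)
        .port = "8080";
    }

    sub vcl_backend_response {
        # Cache static assets so repeat hits never reach nginx/NFS.
        if (bereq.url ~ "\.(css|js|png|jpg|gif|ico)$") {
            set beresp.ttl = 1h;
        }
    }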

Here is a small cachefilesd howto


The speed depends on many factors:

  • How are your servers connected to the NFS target? A single dual-ported SAS disk can use 6 Gbit/s of transfer speed. Keep that in mind if you plan to use 1 Gbit Ethernet (from which you can subtract roughly 20% for TCP overhead).
  • What kind of cache will the NFS server have? Are you using an enterprise-grade array controller with lots of cache? Read cache is key in this setup.
  • How many servers will access the same file simultaneously? NFS locking can hurt, badly (see the mount sketch after this list).
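On the client side, mount options address the transfer and locking points above; a sketch for a read-only static mount (nolock sidesteps NLM locking, which is only safe here because nothing writes through this mount; server and paths are placeholders):

    # Read-only NFS mount tuned for serving static files; values are illustrative
    mount -t nfs -o ro,nolock,tcp,rsize=1048576,wsize=1048576 \
        nfsserver:/export/static /srv/static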

The limit on files open via NFS is a limitation of the host operating system. FreeBSD, for example, has many tuning options to support a large number of open files, but how far you can push them depends on the amount of RAM in your server.
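On FreeBSD those knobs are sysctls; a sketch with example values (what you can actually afford depends on RAM):

    # /etc/sysctl.conf -- raise open-file limits (example values)
    kern.maxfiles=200000          # system-wide open file limit
    kern.maxfilesperproc=100000   # per-process open file limit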

An alternative to a central file server is synchronization/replication between your web servers (as Chris S suggests). rsync or DRBD can be a great, cost-effective choice.