Is this a sensible way to scale Nginx for static content serving?

NFS does not scale. It adds latency to every request and will eventually become a bottleneck. We have a similar issue at work, but with photos (so, much larger files), and we wrote our own software to shard and distribute them. For a few GB of files like you have, you can probably get away with having the upload process do an HTTP PUT to all servers, and resyncing servers that have been offline.
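As a rough illustration of the push-to-all-servers idea (a sketch only: the host list, upload path, and failure log are placeholders, not your actual layout):

    # Push an uploaded file to every web server with HTTP PUT.
    SERVERS="web1.example.com web2.example.com web3.example.com"
    for host in $SERVERS; do
        curl -f -T "$file" "http://$host/uploads/$(basename "$file")" \
            || echo "$host $file" >> push-failures.log   # replay these when the server is back
    done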

Or tackle it another way: keep a (set of) central server(s) with all the files, and put caching reverse proxies (Squid, Pound, Varnish) in front of them to actually serve the files to customers.
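Since the question is about nginx, that edge-cache layer can also be built with nginx's own proxy_cache; in this sketch the origin hostname and cache sizes are assumptions:

    # Edge cache in front of a central file server.
    proxy_cache_path /var/cache/nginx/static levels=1:2
                     keys_zone=static:50m max_size=10g inactive=7d;

    server {
        listen 80;
        location / {
            proxy_pass        http://origin.example.com;  # central origin (placeholder name)
            proxy_cache       static;
            proxy_cache_valid 200 7d;                     # keep successful responses a week
        }
    }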

And you're not crazy not to use a CDN. You are crazy, though, if you don't at least investigate whether one would be worthwhile :-)


Use cachefilesd (and a recent Linux kernel with FS-Cache support) to cache NFS files to a local disk. Every read over NFS copies the file into the local cache directory (/var/cache/fscache by default), and subsequent reads are delivered from there, with the kernel checking against the NFS server whether the content is still valid.

This way you can have a central NFS, but without losing the performance of local files.

cachefilesd takes care of culling old files when free space or free inodes drop to a configured level, so you can serve uncommon data from NFS and common requests from the local disk.
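A minimal sketch of the setup (the culling thresholds are illustrative, not tuned values):

    # /etc/cachefilesd.conf
    dir /var/cache/fscache
    tag mycache
    brun  10%    # culling turns off when free space rises above 10%
    bcull  7%    # culling starts when free space drops below 7%
    bstop  3%    # caching stops entirely below 3%

    # Mount the NFS export with the 'fsc' option so reads go through FS-Cache:
    #   mount -t nfs -o fsc nfsserver:/export /var/www/static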

After setting this up, use Varnish to deliver the content; it will cache the most-used requests, saving a ton of requests to nginx/NFS.
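Starting Varnish for this can be as simple as the line below (the listen address, backend port, and cache size are assumptions):

    # 256 MB in-memory cache in front of nginx on the same host.
    varnishd -a :80 -b 127.0.0.1:8080 -s malloc,256m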

Here is a small cachefilesd howto.


I would recommend getting a single (potentially dedicated) server for this, instead of using several individual VPS servers with separate nginx instances connected through NFS. If you're thinking about using VPS and NFS, I don't think your concerns about scalability are justified.

nginx does almost all of its caching through the machine's filesystem, so if you're going to use nginx for this, you must ensure the operating system has excellent filesystem performance and caching. Make sure your kernel has enough vnodes, etc.
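On FreeBSD (which this answer suggests below) the vnode limit can be checked and raised via sysctl; the value here is purely illustrative:

    sysctl vfs.numvnodes          # vnodes currently in use
    sysctl kern.maxvnodes         # current limit
    sysctl kern.maxvnodes=400000  # raise the limit (example value)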

If you're still thinking about separate machines (my suggestion, as above, is to use one machine with one nginx), then it might make sense to investigate Varnish. Varnish does all of its caching in virtual memory, so you wouldn't have to worry about vnodes or cache inefficiencies with smaller files. Since it uses virtual memory, its cache can be as large as physical memory plus swap.

I would highly recommend against Squid. If you want to know why, just look at a Varnish presentation, which describes why virtual memory is the best way to go for an acceleration proxy. But Varnish only does acceleration, so if you're using a single host with static files and good filesystem caching (e.g. FreeBSD), then nginx would probably be the best choice (otherwise, with Varnish, you'll end up with the same content cached twice in different places).


No production design should have a single point of failure.

Therefore you need at least two machines as load balancers; you can use a load balancer like HAProxy. It has all the features you may need; check this HAProxy architecture example. The actual request load you will face is lots of small-file requests over an NFS storage system.

The number of cache servers depends on your resources and client load. HAProxy can be configured to add or remove cache servers.
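A minimal sketch of that layer (server names, addresses, and the health-check URL are placeholders):

    # haproxy.cfg: balance static-file requests across two cache nodes.
    frontend static_in
        bind *:80
        default_backend caches

    backend caches
        balance roundrobin
        option httpchk GET /health
        server cache1 10.0.0.11:80 check
        server cache2 10.0.0.12:80 check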

Fetching a file over NFS is the most demanding operation, so you need some form of caching on your "cache" machines.

The cache server has three storage layers; you want the most common files to be available locally, preferably in RAM:

  • NFS, by far the slowest. (REMOTE)
  • Local storage, fast. (LOCAL)
  • RAM, ultra fast. (LOCAL)

This can be solved with nginx, Squid, or Varnish.

nginx can cache files locally using SlowFS; here is a good SlowFS tutorial.

Nginx with this system stores files on the local filesystem disk and serves them from there. You can use PURGE to remove a modified file from the cache; it is as simple as making a request with the word "purge" in the request string.
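A minimal sketch based on the ngx_slowfs_cache module's documented directives (paths, zone name, and cache lifetime are placeholders):

    slowfs_cache_path /var/cache/nginx levels=1:2 keys_zone=fastcache:10m;
    slowfs_temp_path  /var/cache/nginx/tmp;

    server {
        location / {
            root               /var/www/nfs;     # the NFS mount (assumed path)
            slowfs_cache       fastcache;
            slowfs_cache_key   $uri;
            slowfs_cache_valid 1d;
        }

        # PURGE a cached file: GET /purge/some/file.jpg
        location ~ /purge(/.*) {
            allow              127.0.0.1;
            deny               all;
            slowfs_cache_purge fastcache $1;
        }
    }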

Nginx with SlowFS uses the RAM the OS provides (the page cache), so giving the OS more free RAM will improve the average request speed. However, if your data set exceeds the server's RAM, you still need to cache the files on the local filesystem.

Nginx is a multipurpose server and is not extremely fast, at least not as fast as dedicated caching servers such as Squid or Varnish. However, if your problem is the NFS, then nginx solves 90% of it.

Squid and Varnish are very fast and have APIs to remove files from the cache.

Squid uses RAM and the local filesystem for its cache. Varnish uses RAM for its cache.

Squid is old, and most benchmarks show that Varnish is faster than Squid at dispatching static files.

However, when Varnish crashes, the RAM cache is lost and the whole server can take a long time to recover. Therefore a crash is a big problem for Varnish.

Squid handles crashes better because it also uses the local storage disk and can rebuild part of its cache from there instead of going back to the NFS.

For optimal performance serving small static files, you can use nginx together with either Squid or Varnish.

Other file sizes require a different approach.