understanding max file descriptors for linux and nginx, and best value for worker_rlimit_nofile

worker_rlimit_nofile sets the file descriptor limit for the nginx worker processes, as opposed to the limit for the user running nginx. If other programs running under that user cannot gracefully handle running out of file descriptors, you should set this limit slightly lower than the user's limit.
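
As a rough sketch of where it goes (the value here is only an illustration, not a recommendation), the directive sits in the main, top-level context of nginx.conf:

#/etc/nginx/nginx.conf
#worker_rlimit_nofile applies per worker process and lives in the main context
worker_rlimit_nofile 10240;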

First, what is using your file descriptors?

  1. Each active connection to a client
  2. Using proxy_pass? That opens a socket to the host:port handling those requests (see the sketch after this list)
  3. Using proxy_pass to a local port? That's another open socket (for the owner of that process)
  4. Static files being served by nginx
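
For example, with a proxied location like the hypothetical one below, each in-flight request costs the worker two descriptors: one for the client connection and one for the socket to the backend (the addresses and paths are only illustrative):

#/etc/nginx/conf.d/example.conf (illustrative)
server {
    listen 80;

    location / {
        #one fd for the client connection, a second for this upstream socket
        proxy_pass http://127.0.0.1:8080;
    }

    location /static/ {
        #each file served from disk is another open fd while it is being read
        root /var/www;
    }
}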

Why would the limit per worker be less than the OS limit?

This is controlled by the OS because the worker is not the only process running on the machine. To change the limit for the user running nginx, see below. It would be very bad if your workers used up all of the file descriptors available to all processes, so don't set your limits in a way that makes that possible.
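
As a rough sanity check (the numbers are only illustrative, not recommendations), multiply the per-worker limit by the number of workers and make sure the result leaves plenty of headroom under both the system-wide and per-user limits:

#illustrative sizing check
#worker_processes 4, worker_rlimit_nofile 10240
#  4 * 10240 = 40960 descriptors the nginx workers could hold at once
#keep that figure comfortably below fs.file-max (65536 below)
#and at or below the nofile limit of the user running nginx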

#/etc/sysctl.conf
#This sets the value you see when running cat /proc/sys/fs/file-max
fs.file-max = 65536
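
To apply that change without rebooting, you can reload the file or set the value directly; both are standard sysctl usage:

#reload /etc/sysctl.conf
sysctl -p

#or set it immediately, then verify
sysctl -w fs.file-max=65536
cat /proc/sys/fs/file-max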


#/etc/security/limits.conf
#this sets the defaults for all users
* soft nofile 4096
* hard nofile 4096

#This overrides the default for user `usernamehere`
usernamehere soft nofile 10240
usernamehere hard nofile 10240
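
On most distributions these limits are applied by PAM (pam_limits) when a new session starts, so they will not affect shells or services that are already running. After logging in again you can verify them with:

#hard and soft nofile limits for the current session
ulimit -Hn
ulimit -Sn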

After those security limit changes, I believe I still had to increase the soft limit for the user with ulimit.
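
If you start nginx from an init script, one way to do that is to raise the soft limit in the shell that launches it (a sketch; the value and the nginx path are assumptions, adjust them to your setup):

#raise the soft limit for this shell (it cannot exceed the hard limit), then start nginx
ulimit -Sn 10240
/usr/sbin/nginx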

How do I find out what my limit is now?

ulimit -a will display all the limits associated with the user you run it as.
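
For a process that is already running, such as an nginx worker, you can also read its effective limits from /proc (the pgrep pattern below is just one way to find a worker's pid):

#show the limits of one running nginx worker process
cat /proc/$(pgrep -f "nginx: worker" | head -n 1)/limits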


As for the default when worker_rlimit_nofile isn't set, I'd have to check the source to be honest, but it's fairly low.

I used worker_rlimit_nofile 15000; and had no issues. You can safely increase it, though the chance of running out of file descriptors is minuscule.
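
Putting it together, a minimal nginx.conf sketch (the values are illustrative; since proxied connections can use two descriptors each, keep worker_rlimit_nofile comfortably above twice worker_connections):

#/etc/nginx/nginx.conf (illustrative values)
worker_processes 4;

#per-worker fd limit, roughly 2x worker_connections plus headroom for logs and static files
worker_rlimit_nofile 15000;

events {
    worker_connections 4096;
}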