What are the ramifications of increasing the maximum # of open file descriptors

Ubuntu seems to have a default limit of 1024 open file descriptors. During some load testing, Nginx complained that the maximum number of open file descriptors had been reached, and the server was basically unusable.

I'm testing a service that will have to sustain 2-3K requests per second, and each request will write to a file (or potentially another store such as MySQL). These files will be purged from the server within x minutes.

If I run ulimit -n and increase the number of open file descriptors, what ramifications does this have? Does it make the OS set aside more memory to manage the file descriptors, or is there more to it?
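
For reference, the per-process limit that ulimit -n reports is RLIMIT_NOFILE; here is a minimal C sketch (purely illustrative, nothing specific to my service) that prints the current soft and hard values:

/* Illustrative only: print this process's soft and hard RLIMIT_NOFILE
 * limits; the soft limit is the value ulimit -n reports. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %llu\n", (unsigned long long)rl.rlim_cur);
    printf("hard limit: %llu\n", (unsigned long long)rl.rlim_max);
    return 0;
}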


I thought some simple googling would produce an answer, but I was thwarted. I spent some time looking at the Linux source and found file_table.c. The files_init function at the bottom of that source file is where the open file table is initialized, and it has this comment:

/*
 * One file with associated inode and dcache is very roughly 1K.
 * Per default don't use more than 10% of our memory for files. 
 */ 

The logic in that function sets the system-wide max_files to the larger of the compile-time default (NR_FILE) and 10% of system memory. Apparently every open file consumes around 1K of memory.
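
As a rough illustration, the sizing rule amounts to something like the user-space sketch below. This is my reconstruction, not the actual kernel code; it assumes mempages is the number of physical RAM pages and NR_FILE is the compile-time floor.

/* User-space sketch of the files_init sizing logic described above;
 * not the real kernel code. NR_FILE is assumed to be 8192, the
 * default floor in the kernel source I looked at. */
#include <stdio.h>
#include <unistd.h>

#define NR_FILE 8192

int main(void)
{
    long mempages  = sysconf(_SC_PHYS_PAGES);  /* physical RAM pages */
    long page_size = sysconf(_SC_PAGE_SIZE);

    /* ~1K per open file, so the cap works out to 10% of RAM expressed in KB */
    long n = (mempages * (page_size / 1024)) / 10;
    long max_files = n > NR_FILE ? n : NR_FILE;

    printf("derived max_files: %ld\n", max_files);
    return 0;
}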

However, that doesn't answer the question of how much memory it takes to manage the unused file descriptors. I think that is a very small amount per file descriptor, just a few bytes.
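
To put a rough number on it: as far as I can tell, the per-process descriptor table is essentially an array of struct file pointers plus a couple of small bitmaps, and it only grows as high descriptor numbers are actually used. A back-of-the-envelope sketch (my assumption, not exact kernel accounting):

/* Back-of-the-envelope estimate of per-process bookkeeping for a
 * descriptor table with the given number of slots, used or not.
 * Assumes one struct file pointer per slot plus two 1-bit-per-slot bitmaps. */
#include <stdio.h>

int main(void)
{
    unsigned long slots   = 1UL << 20;               /* 1,048,576 descriptor slots */
    unsigned long ptrs    = slots * sizeof(void *);  /* one pointer per slot */
    unsigned long bitmaps = 2 * (slots / 8);         /* open-fds and close-on-exec bitmaps */

    printf("~%.1f MiB of bookkeeping for %lu slots\n",
           (ptrs + bitmaps) / (1024.0 * 1024.0), slots);
    return 0;
}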

I don't think there are any real disadvantages to setting the maximum number of files to a very large number, up to 2^20 (per this Stack Overflow question). However, if each open file takes around 1K, you will obviously run out of system memory long before you hit that limit.

My advice is to go ahead and set your system max open files to a much larger number. If each open file consumes around 1K, then 128,000 open files will only consume around 128MB of your system RAM. That shouldn't be much of a problem on a modern system with many gigabytes of RAM.
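
Once you have raised the limit, /proc/sys/fs/file-nr is a cheap way to keep an eye on actual system-wide usage (allocated handles, free handles, and the current maximum). A small sketch that reads it and applies the ~1K-per-file estimate:

/* Read /proc/sys/fs/file-nr: allocated handles, free handles, and the
 * fs.file-max ceiling. The MiB figure uses the rough 1K-per-file estimate. */
#include <stdio.h>

int main(void)
{
    unsigned long allocated, unused, max;
    FILE *f = fopen("/proc/sys/fs/file-nr", "r");

    if (!f || fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) != 3) {
        perror("file-nr");
        return 1;
    }
    fclose(f);

    printf("in use: %lu of %lu (~%lu MiB at 1K each)\n",
           allocated - unused, max, (allocated - unused) / 1024);
    return 0;
}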

Disclaimer: I base all of this on my personal sysadmin knowledge and some very superficial reading of the Linux source code. I'm no kernel hacker.


In Unix-type operating systems, file descriptors are used for pretty much anything that reads or writes: I/O devices, pipes, sockets, and so on. Typically you modify this limit when running Oracle or web servers. The reason the default is low is that it dates from the days when many users shared the system. The only real harm comes if you have very little memory, but for high-performance servers you can typically set this to 30k.
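
As a sketch of what that looks like in practice, a server process can raise its own soft limit at startup, before it opens any sockets. This assumes the hard limit and fs.file-max already allow the new value:

/* Sketch: raise this process's soft RLIMIT_NOFILE to 30,000 at startup.
 * Assumes the hard limit (and fs.file-max) already permit it; if not,
 * setrlimit fails and limits.conf or sysctl needs adjusting first. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    rl.rlim_cur = 30000;               /* desired soft limit */
    if (rl.rlim_max < rl.rlim_cur)     /* cannot exceed the hard limit */
        rl.rlim_cur = rl.rlim_max;

    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    printf("soft RLIMIT_NOFILE is now %llu\n", (unsigned long long)rl.rlim_cur);
    return 0;
}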