Too many open files: how many are open, what they are, and how many can the JVM open

I'm getting this exception in Java:

java.io.FileNotFoundException: (Too many open files) 

I'm looking for ways to eliminate this problem.

This error obviously indicates that the JVM has allocated too many handles and the underlying OS won't let it have more. Most likely I've got a leak somewhere with improperly closed connections or streams.
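(For context, the kind of leak I'm hunting for is a stream or connection that gets opened but not closed on some code path. The try-with-resources shape below, Java 7+, is the pattern I'm auditing my code against; the class name and file path are just an illustration.)

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadFirstLine {

        // try-with-resources closes the reader (and its file descriptor)
        // even if readLine() throws, so nothing can leak here.
        static String firstLine(String path) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println(firstLine(args[0]));
        }
    }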

This process runs non-stop for days and eventually throws the exception; it consistently happens after 12-14 days of uptime.

How do you fight this? Is there a way to get a list of the handles allocated in the JVM, or to track when it hits a certain amount? I'd love to have them printed so I can see how the count grows and when. I can't use a profiler because it's a production system, and I have trouble reproducing the issue in development. Any suggestions?

I am monitoring free heap size and raising an "alarm" when it approaches 1% of the total specified in -Xmx. I also know that if my thread count climbs above 500, something has definitely gotten out of hand. Now, is there a way to know that my JVM is allocating too many handles from the OS and not giving them back, e.g. sockets, open files, etc.? If I knew that, I'd know where to look and when.


You didn't say which OS you are running on, but if you are on Linux you can use the lsof command:

lsof -p <pid of jvm>

That will list all the files opened by the JVM. Or, if you are running on Windows, you can use Process Explorer, which will show all the open files for all processes.

Doing this will hopefully allow you to narrow down which bit of the code is keeping the files open.
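You can also watch the count from inside the JVM itself, which suits a production system where attaching a profiler isn't an option. On HotSpot-based JVMs on Unix-like systems, the OperatingSystemMXBean usually implements com.sun.management.UnixOperatingSystemMXBean, which exposes the open and maximum file descriptor counts. A minimal monitoring sketch (the one-minute interval and 80% alarm threshold are arbitrary choices):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public class FdWatcher {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (!(os instanceof com.sun.management.UnixOperatingSystemMXBean)) {
                System.err.println("File descriptor counts are not exposed on this JVM/OS");
                return;
            }
            com.sun.management.UnixOperatingSystemMXBean unixOs =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            while (true) {
                long open = unixOs.getOpenFileDescriptorCount();
                long max = unixOs.getMaxFileDescriptorCount();
                System.out.println("open file descriptors: " + open + " / " + max);
                // Raise the same kind of "alarm" you already use for heap and thread count.
                if (open > max * 0.8) {
                    System.err.println("WARNING: more than 80% of file descriptors in use");
                }
                Thread.sleep(60000); // sample once a minute
            }
        }
    }

In your own server you would call these two methods from an existing scheduler thread rather than a standalone main; the point is that they answer "how many are open" and "how many can the JVM open" without any external tooling.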


Since you are on Linux, I'd suggest that you check the /proc filesystem. Inside /proc you will find a directory named after the PID of your process, containing a directory called 'fd'. If your process ID is 1234, the path would be

/proc/1234/fd

Inside that directory you will find symbolic links to all opened files (do an 'ls -l'). Usually you can tell from the filename which library or piece of code opens the file and fails to close it.
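Since /proc is just a filesystem, you can also take that snapshot from inside the JVM and write it to your own logs on a schedule, which is convenient on a production box. A Linux-only sketch (class and method names are mine):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class OpenFdDumper {

        // Print every open descriptor of this process and what it points to (Linux only).
        static void dumpOpenFds() {
            Path fdDir = Paths.get("/proc/self/fd");
            try (Stream<Path> fds = Files.list(fdDir)) {
                fds.forEach(fd -> {
                    try {
                        // Each entry is a symlink to a regular file, socket, pipe, ...
                        System.out.println(fd.getFileName() + " -> " + Files.readSymbolicLink(fd));
                    } catch (IOException e) {
                        // The descriptor was closed between listing it and resolving the link.
                        System.out.println(fd.getFileName() + " -> ?");
                    }
                });
            } catch (IOException e) {
                System.err.println("cannot read " + fdDir + ": " + e);
            }
        }

        public static void main(String[] args) {
            dumpOpenFds();
        }
    }

Note that the directory stream used for the listing briefly shows up as one extra descriptor of its own.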


So, a full answer (I combined the answers from @phisch and @bramp). If you want to check all processes, you should use sudo. It is also worth saving the result to a file - lsof is not cheap, and the file can be useful for further investigation.

sudo lsof > lsof.log

Show the biggest offenders (with the update from @Arun's comment):

cat lsof.log | awk '{print $1 " " $2 " " $5}' | sort | uniq | awk '{ print $2 " " $1; }' | sort -rn | uniq -c | sort -rn | head -5

    2687 114970 java
    131 127992 nginx
    109 128005 nginx
    105 127994 nginx
    103 128019 nginx

Save the list of file descriptors to a file as well:

sudo ls -l /proc/114970/fd > fd.log

Show the files with the most open descriptors:

cat fd.log | awk '{ print $11 }' | sort | uniq -c | sort -rn | head -n20