Python Subprocess: Too Many Open Files

Solution 1:

On Mac OS X (El Capitan), first check the current configuration:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

Set the open files value to 10,000:

$ ulimit -Sn 10000

Verify results:

$ ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 10000
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited
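
If you want to double-check what a Python process started from that shell actually sees, a minimal sketch using the standard resource module (Unix only) is:

import resource

# Soft and hard limits on open file descriptors, as seen by this process
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft={}, hard={}".format(soft, hard))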

Solution 2:

I guess the problem was that I was processing a still-open file with subprocess:

cmd = "enerCHARMM.pl -par param=x,xtop=topology_modified.rtf,xpar=lipid27_modified.par,nobuildall -out vdwaals {0}".format(cmtup[1])
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)

Here the cmd variable contains the name of a file that had just been created but not closed. subprocess.Popen then calls a system command on that file. After doing this many times, the program crashed with that error message.

So the lesson I learned from this is:

Close the file you have created, then process it
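
A minimal sketch of that fix, with a hypothetical file name and a placeholder some_tool command standing in for the real enerCHARMM.pl invocation:

import subprocess

input_path = "input.dat"                # hypothetical file name
with open(input_path, "w") as f:        # the with-block closes the file on exit
    f.write("...")                      # placeholder for the real content

# Launch the subprocess only after the file has been closed.
p = subprocess.Popen(["some_tool", input_path], stdout=subprocess.PIPE)
output, _ = p.communicate()             # read the output and reap the child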

Solution 3:

You can try raising the open file limit of the OS:

ulimit -n 2048
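
If you would rather raise the limit from inside the Python script instead of the shell, a sketch using the standard resource module (Unix only; 2048 is just an example target) is:

import resource

# Raise this process's soft limit on open files; an unprivileged process
# cannot exceed its hard limit.
target = 2048
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if hard != resource.RLIM_INFINITY:
    target = min(target, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))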

Solution 4:

As others have noted, raise the limit in /etc/security/limits.conf. File descriptors were also an issue for me personally, so I ran

sudo sysctl -w fs.file-max=100000 

And added to /etc/sysctl.conf:

fs.file-max = 100000

Reload with:

sudo sysctl -p

Also, if you want to make sure that your process is not affected by anything else (mine was), use

cat /proc/{process id}/limits 

to find out what the actual limits of your process are. In my case, the software running the Python scripts also had its own limits applied, which overrode the system-wide settings.
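
On Linux you can also read those limits from inside the script itself; a minimal sketch, assuming a /proc filesystem:

# Print the open-files limits actually applied to this Python process (Linux only)
with open("/proc/self/limits") as f:
    for line in f:
        if line.startswith("Max open files"):
            print(line.rstrip())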

Posting this answer here after resolving my particular issue with this error; hopefully it helps someone.