Fix "fork: resource temporarily unavailable" on OS X / macOS

I ran into some problems on my Mac running OS X El Capitan 10.11.6 that appeared to be related to resource limits. I was seeing errors like "error: cannot spawn" or "unable to fork" or

-bash: fork: retry: Resource temporarily unavailable
-bash: fork: Resource temporarily unavailable

Since my system was not completely freezing or spewing errors all over the place, I suspected that I had reached my limit on the number of processes I could run.

$ ulimit -u
709

709 is a lot of processes, but I was running a lot of servers on my Mac, so it seemed possible I was using that many. I tried this command, which prints out how many processes the user is running (actually 1 more than the number of processes, because wc -l also counts the header line ps prints):

$ ps -xu $(id -ru) | wc -l
fork: Resource temporarily unavailable

OK, I could not even run that command, which creates only 3 processes, so I was probably at the process limit. Also, the error message did not start with -bash, so it was not bash enforcing the ulimit; I was hitting the systemwide per-user limit, kern.maxprocperuid.
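For reference, that limit can be read directly with sysctl; on my machine it matched the ulimit -u figure above:

$ sysctl kern.maxprocperuid
kern.maxprocperuid: 709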

I was able to confirm this by running a shell as root:

$ id -ru
501
$ sudo -i
root# ps -xu 501 | wc -l
706

Yes, 706 < 709, but mdworker processes routinely come and go, so being that close to the limit was evidence enough.
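As an aside, you can avoid the off-by-one by dropping the header line ps prints; on the same snapshot this would have shown 705:

root# ps -xu 501 | tail -n +2 | wc -l
705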

How do I increase the limit on the number of processes I can run? I searched around for answers and found articles that suggested I create a file /Library/LaunchDaemons/limit.maxproc.plist with this content:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>limit.maxproc</string>
  <key>ProgramArguments</key>
  <array>
    <string>launchctl</string>
    <string>limit</string>
    <string>maxproc</string>
    <string>2048</string>
    <string>4096</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>ServiceIPC</key>
  <false/>
</dict>
</plist>

I did that and rebooted, and now:

$ ulimit -u
2048

Which would be great, except

$ sysctl kern.maxproc
kern.maxproc: 1064

THIS IS DANGEROUS. It means the entire system is still limited to a maximum of 1064 processes, but an individual user is now allowed to create more than that. When the system hits its maximum number of processes, it generally crashes quickly and in a very bad way. It is therefore very important that no single user be allowed to create enough processes to fill up the process table, which is why the Mac limits each user to 2/3 of the total number of processes (2/3 of 1064 is 709, which is where the default ulimit comes from).

I found similar suggestions for raising the limits on the maximum number of open files, maxfiles and maxfilesperproc. These are even more dangerous, because a single process should get nowhere near the systemwide limit on the total number of open files.
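You can check where those limits stand on your machine with the same tools (the exact values vary by system and macOS version):

$ sysctl kern.maxfiles kern.maxfilesperproc
$ ulimit -n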

The other issue is that if 709 processes were not enough, the roughly 1,000 the system can actually deliver (kern.maxproc is still 1064) is really unlikely to be enough either.

So, what is the right way to fix "Too many open files" or "fork: resource temporarily unavailable" on OS X El Capitan?


Solution 1:

If your computer is routinely bumping up against process limits and it is not due to some runaway program or malware, then instead of trying to override the defaults, you should switch to a different set of defaults known as "Performance Mode" (a.k.a. "Server Performance Mode" or perfmode or serverperfmode).

To do this on El Capitan (and later macOS versions, until further notice), you do not need to create new files or mess with turning SIP on or off; all you need to do is:

$ sudo nvram boot-args="serverperfmode=1 $(nvram boot-args 2>/dev/null | cut -f 2-)"
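The command substitution preserves anything already in boot-args: nvram prints the variable name, a tab, and the value, which is why cut -f 2- strips just the name. You can check the result before rebooting:

$ nvram boot-args
boot-args	serverperfmode=1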

This is officially supported by Apple, and it changes the default configuration of your machine from an interactive computer used for email and web browsing to a server meant to run a lot of non-interactive software.

The new limits depend on how much memory you have, and they require at least 16 GiB of installed memory. At a minimum, you will see your per-user limit on the number of processes increase from 709 to 3,750, and many other limits are raised as well. Most importantly, the systemwide limits also increase, so a single rogue user or program will not be able to crash the system through resource exhaustion.
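After rebooting, you can confirm the change took effect; on a 16 GiB machine, for example, you should see at least:

$ ulimit -u
3750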

This is a much better way to increase your system limits, because it raises all the interrelated limits consistently, taking into account the capabilities of your specific computer. It is also much easier to do. The only downside is that if you ever reset your NVRAM, you will need to run the nvram command again to turn performance mode back on. But that is also a plus: if the new settings really get you into trouble, all you need to do to undo them is reset your NVRAM, which is a normal recovery step when things go really bad. If you instead created the LaunchDaemons plist files some people suggest, you would have to remember they exist and edit or delete them to get back to a standard configuration.
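For completeness: the targeted way to undo the nvram change is sudo nvram -d boot-args (note that this clears any other boot arguments you may have set, too), and if you did create the plist file described earlier, backing it out looks like this, followed by a reboot:

$ sudo nvram -d boot-args
$ sudo launchctl unload /Library/LaunchDaemons/limit.maxproc.plist
$ sudo rm /Library/LaunchDaemons/limit.maxproc.plist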