Why is macOS limited to 1064 or 2088 processes?
Solution 1:
As far as I can tell, the answer is part of the long history of computing.
macOS is based on Darwin, which is the core of Apple's operating system, and which itself is based on xnu, which is a hybrid of FreeBSD layered over Mach, which are based on... until you go back really far to, I don't know, Ada Lovelace? Believe it or not, that is an oversimplification of the macOS history, but it covers the points needed to understand what follows.
In the days when FreeBSD was being developed, memory was expensive, so there was a strong incentive to use as little as possible, but not so expensive that you could not build a system shared by a university department. The idea was to have one computer that could be used by many people at once; even though the memory was expensive, each user was only consuming some memory and disk space rather than a whole computer, so you wanted to be able to serve as many people as you could without compromising the whole system.
The kernel has a section of memory reserved to hold information for each process. The size of that table was what limited the total number of processes the system could run at once, so you wanted to keep it as small as possible, but at the same time big enough to run enough processes to serve all your users. Other kernel memory reservations, such as the number of buffers allocated to store network traffic temporarily, also needed to scale with the number of concurrent users, so FreeBSD introduced a tuning parameter called MAXUSERS, which was not a real limit on how many users the system would handle, but rather a tuning parameter that you set to indicate the maximum number of users you intended to handle at one time. It adjusted the balance between memory allocated to the kernel and memory available to the users.
Prior to graphical user interfaces, individual users on a Unix system typically ran very few processes. They ran a terminal shell, which might run an editor, an email program, a compiler, and some other program, each of which was one process. The email program and compiler would run a few other processes of their own. At some point someone estimated that a user would likely need to run no more than 16 processes at once.
Someone (probably the same person) estimated that the system itself could be safely limited to about 20 processes.
This provided the basis for scaling the size of the process table based on the number of concurrent users the system would support:
#define NPROC (20 + 16 * MAXUSERS)
At the time Apple incorporated xnu into their system (and presumably at the time xnu forked from FreeBSD), MAXUSERS defaulted to 32. With 32 users, NPROC was 532. And so it was in OS X 10.0: 532 was the systemwide limit on the total number of processes. Individual users were limited to half that: 266.
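As a minimal sketch of that arithmetic (reusing the names from above; this is illustrative, not the actual xnu initialization code), the OS X 10.0 numbers fall out like this:

#include <stdio.h>

#define MAXUSERS 32                    /* default at the time xnu forked from FreeBSD */
#define NPROC    (20 + 16 * MAXUSERS)  /* 20 system processes + 16 per user */

int main(void) {
    int maxproc       = NPROC;       /* systemwide limit: 532 */
    int maxprocperuid = NPROC / 2;   /* per-user limit: 266 */
    printf("maxproc = %d, maxprocperuid = %d\n", maxproc, maxprocperuid);
    return 0;
}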
And so it remained until OS X 10.7 Lion, at which point Apple added a scale factor. (Actually, it had had a scale factor long before Lion for Server Performance Mode, but with Lion they extended it a bit to normal mode.) If your computer is running Lion or later and has 3 GiB or more of memory, then maxproc is doubled, and maxprocperuid, which is a fraction of maxproc, goes up even more. maxproc goes from 532 to 1064 and maxprocperuid goes from 266 (maxproc / 2) to 709 ((maxproc * 2) / 3).
And so you get your 2010s (10.7 Lion through 10.14 Mojave) limits of 709 processes per user and 1064 processes systemwide.
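A sketch of that adjustment (just the arithmetic as described above, not the real kernel code; the 8 GiB machine is a hypothetical example):

#include <stdio.h>

#define NPROC 532   /* 20 + 16 * 32, unchanged since OS X 10.0 */

int main(void) {
    long long mem_gib = 8;   /* hypothetical machine with 8 GiB of RAM */

    int maxproc, maxprocperuid;
    if (mem_gib >= 3) {
        /* Lion through Mojave, 3 GiB or more: double the systemwide limit
           and raise the per-user share from 1/2 to 2/3 */
        maxproc       = NPROC * 2;          /* 1064 */
        maxprocperuid = (maxproc * 2) / 3;  /* 709  */
    } else {
        maxproc       = NPROC;              /* 532 */
        maxprocperuid = maxproc / 2;        /* 266 */
    }
    printf("maxproc = %d, maxprocperuid = %d\n", maxproc, maxprocperuid);
    return 0;
}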
In 10.10 Yosemite, NPROC changed from (20 + 16 * MAXUSERS) to (20 + 16 * 32), presumably so they could get rid of MAXUSERS while keeping NPROC unchanged. In 10.15 Catalina, NPROC was raised for the first time, but oddly they kept the "20 +" and made it (20 + 32 * 32), which works out to 1044. So starting with Catalina, maxproc is a multiple of 1044.
Why a multiple, you ask? Starting with 10.13 High Sierra, maxproc continues to scale up as you add more memory. It doubles if you have 3 GiB or more but less than 12 GiB of memory, and above that it multiplies by the memory size divided by 4 GiB, until you hit a cap at a scale factor of 16 once you have 64 GiB or more of memory installed.
Now you know why maxproc is a multiple of 532 or 1044 rather than the more common 512 or 500.
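Putting the pieces together, here is a sketch of that memory-based scaling. The thresholds (3 GiB, 12 GiB, 4 GiB steps, cap of 16) and the base value of 1044 for Catalina and later come from the discussion above; this is illustrative arithmetic, not the actual xnu code:

#include <stdio.h>

/* Scale factor as described for 10.13 High Sierra and later */
static int scale_factor(long long mem_gib) {
    if (mem_gib < 3)  return 1;
    if (mem_gib < 12) return 2;
    int scale = (int)(mem_gib / 4);
    return scale > 16 ? 16 : scale;   /* capped at 16 for 64 GiB or more */
}

int main(void) {
    const int nproc = 1044;           /* Catalina and later: 20 + 32 * 32 */
    long long sizes[] = {2, 4, 8, 16, 32, 64, 128};

    for (int i = 0; i < 7; i++) {
        int maxproc = nproc * scale_factor(sizes[i]);
        printf("%3lld GiB -> maxproc = %d\n", sizes[i], maxproc);
    }
    return 0;
}

For a typical machine with 4 or 8 GiB of RAM that gives 2088, which is where the number in the question comes from.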
If you want to increase your limits further, you have to switch to server mode. You can read more about how to do that and why at Fix “fork: resource temporarily unavailable” on OS X. You can read a lot of details about what server mode really gives you at What does serverperfmode=1 actually do on macOS?
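If you just want to see the values your own machine ended up with, the kernel exposes them through the kern.maxproc and kern.maxprocperuid sysctls (the command sysctl kern.maxproc kern.maxprocperuid shows the same thing). A minimal way to read them from C:

#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void) {
    int maxproc = 0, maxprocperuid = 0;
    size_t len = sizeof maxproc;

    /* Read the kernel's current systemwide and per-user process limits */
    if (sysctlbyname("kern.maxproc", &maxproc, &len, NULL, 0) != 0)
        perror("kern.maxproc");
    len = sizeof maxprocperuid;
    if (sysctlbyname("kern.maxprocperuid", &maxprocperuid, &len, NULL, 0) != 0)
        perror("kern.maxprocperuid");

    printf("kern.maxproc       = %d\n", maxproc);
    printf("kern.maxprocperuid = %d\n", maxprocperuid);
    return 0;
}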