Why limit the number of open files and running processes in Linux?
Solution 1:
This is mainly for historical reasons. On older Linux mainframes, many users would connect and use the mainframe's resources. It was necessary to limit how much each user could consume, and since resources such as file handles and processes were managed by the kernel, that is where the limits were enforced. Limits also help contain attacks like the fork bomb; one defense is to cap the number of processes a user may create, as sketched below.
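As a rough, illustrative sketch (not a recommended configuration), the standard setrlimit(2) interface can lower the per-user process cap; the value 1024 below is arbitrary. Once the user's process count reaches the limit, further fork() calls fail with EAGAIN rather than exhausting the machine.

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("process limit: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Lower the soft limit (it may not exceed the hard limit); the
     * limit is inherited by children, so a runaway fork loop hits it
     * and fork() starts returning -1 with errno == EAGAIN. */
    rl.rlim_cur = (rl.rlim_max != RLIM_INFINITY && rl.rlim_max < 1024)
                      ? rl.rlim_max : 1024;
    if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}

The shell equivalent is ulimit -u.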
Such limits also help keep complex services and daemons in check by preventing the kind of runaway forking and file opening that a fork bomb produces.
Also worth noting are the amount of RAM and CPU available, and the fact that a 32-bit counter can only reference 4,294,967,296 entries; but such hard limits are far above anything programmers and system administrators usually set. In any case, long before you reach 4,294,967,296 processes, your machine would have been rebooted, either as planned or because it began to lock up after starving some other resource.
Unless you run Titan with its 584 TiB of memory (you won't, since Linux cannot be run as a single instance across a supercomputer like that), you probably won't reach this process limit anytime soon. Even then, 584 TiB spread across 2^32 processes would leave the average process only about 146 KiB of memory, assuming no shared memory.
Solution 2:
There are limits to almost everything, both in life and in Linux. Sometimes the limits are so high they are not worth worrying about; sometimes they are too low, set arbitrarily by a lazy programmer or imposed by the hardware.
Arbitrary limits come about when a programmer makes a decision such as the maximum number of characters allowed in a field like an address. It's a lot easier to set some arbitrary limit than to dynamically allocate memory as needed. You don't want to allocate all available memory to a simple input field, but you don't want the input to overwrite other memory that may be important, either. For open files and processes, there may be an arbitrary limit, or the limit may be whatever available memory allows. If the latter, then bad things start to happen as you approach the limit and the system may hang, so it's often good to have a limit that kicks in before that happens. Sometimes the arbitrary limits are determined at startup based on memory or disk space, and are usually fairly intelligent, but still arbitrary - someone decided where performance would become intolerable or dangerous and set limits to avoid that situation.
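To make the address-field example concrete, here is a minimal sketch; the name ADDRESS_MAX and the value 128 are invented for illustration, exactly the kind of arbitrary limit described above.

#include <stdio.h>

#define ADDRESS_MAX 128   /* an arbitrary limit chosen up front by the programmer */

int main(void)
{
    char address[ADDRESS_MAX];

    printf("Address: ");
    /* fgets() writes at most ADDRESS_MAX - 1 characters plus the
     * terminating NUL, so overly long input is truncated instead of
     * overwriting adjacent memory. */
    if (fgets(address, sizeof address, stdin) != NULL)
        printf("stored: %s", address);
    return 0;
}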
Then there are limits set by hardware, such as the size of an integer or character. If you have a 32-bit system, then there are limits set by the maximum size of an integer (4,294,967,295 if unsigned, or half that if signed).
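For reference, those word-size limits are visible to any C program through <limits.h>; a trivial sketch:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The ceilings imposed by the integer width the platform uses. */
    printf("unsigned int max: %u\n", UINT_MAX); /* 4294967295 with 32-bit int */
    printf("signed int max:   %d\n", INT_MAX);  /* 2147483647 */
    return 0;
}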
Solution 3:
For processes, /proc/sys/kernel/pid_max is the maximum PID allowed, so this is the hard limit on the number of processes that can exist at any instant. However, memory limits normally come into play well before that.
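A small sketch that reads that value from the file named above (error handling kept minimal); the value can also be changed at run time with sysctl kernel.pid_max, up to a compile-time ceiling.

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
    long pid_max;

    if (f == NULL) {
        perror("/proc/sys/kernel/pid_max");
        return 1;
    }
    if (fscanf(f, "%ld", &pid_max) == 1)
        printf("pid_max = %ld\n", pid_max);
    fclose(f);
    return 0;
}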
For the upper limit on open files on the entire machine, /proc/sys/fs/file-max is the hard limit; this is based on the amount of memory in the machine, so it will vary. The kernel sets this to be:
n = (mempages * (PAGE_SIZE / 1024)) / 10;                 /* roughly one file per 10 KiB of RAM */
files_stat.max_files = max_t(unsigned long, n, NR_FILE);  /* but never below the NR_FILE floor */
which works out to about 100 file handles for each 1 MB of RAM.
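As a sanity check, the same calculation can be reproduced from userspace and compared with the live value. This is only a sketch of the formula above: distributions and administrators frequently override fs.file-max, and newer kernels may compute the default differently, so the two numbers will not always agree.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* _SC_PHYS_PAGES is a Linux/glibc extension. */
    long mempages  = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGESIZE);
    long estimate  = (mempages * (page_size / 1024)) / 10;

    unsigned long long file_max = 0;
    FILE *f = fopen("/proc/sys/fs/file-max", "r");
    if (f != NULL) {
        if (fscanf(f, "%llu", &file_max) != 1)
            file_max = 0;
        fclose(f);
    }

    printf("estimated default:   %ld\n", estimate);
    printf("current fs.file-max: %llu\n", file_max);
    return 0;
}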
Per-process limits are different; these are what ulimit defines.
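For example, the per-process open-file limit that ulimit -n shows corresponds to RLIMIT_NOFILE, which a program can query with getrlimit(2):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* The soft limit is what currently applies; the hard limit is the
     * ceiling an unprivileged process may raise its soft limit to. */
    printf("open files per process: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);
    return 0;
}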