Is there a limit on the stack size of a process in Linux?

Solution 1:

The stack is normally limited by a resource limit. You can see what the default settings are on your installation using ulimit -a:

stack size              (kbytes, -s) 8192

(this shows that mine is 8 MB, which is huge).

If you remove or increase that limit, you still won't be able to use all of the machine's RAM for the stack: the stack grows downward from a point near the top of your process's address space, and at some point it will run into your code, heap, or loaded libraries.
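If you want to inspect or change the limit from inside a program rather than the shell, it is exposed through getrlimit()/setrlimit() with RLIMIT_STACK (the same mechanism ulimit uses). A minimal C sketch, with error handling kept short, that reads the limit and raises the soft limit to the hard one:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("soft stack limit: unlimited\n");
    else
        printf("soft stack limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);

    /* An unprivileged process may raise its soft limit up to the
     * hard limit; only root can raise the hard limit itself. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_STACK, &rl) != 0)
        perror("setrlimit");

    return 0;
}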

Solution 2:

The limit can be set by the admin.

See man ulimit.

There is also a hard limit that an unprivileged process cannot raise; only the administrator can. If you have to worry about stack limits, I would say you need to rethink your design; perhaps write an iterative version instead?
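To illustrate that last suggestion, here is a hypothetical C sketch of the same computation written both ways: the recursive version's stack use grows with n, while the iterative one needs only a single frame:

#include <stdio.h>

/* Each call pushes a new stack frame, so stack use grows with n. */
unsigned long sum_recursive(unsigned long n)
{
    return n == 0 ? 0 : n + sum_recursive(n - 1);
}

/* Same result using a single stack frame, regardless of n. */
unsigned long sum_iterative(unsigned long n)
{
    unsigned long total = 0;
    for (unsigned long i = 1; i <= n; i++)
        total += i;
    return total;
}

int main(void)
{
    printf("iterative: %lu\n", sum_iterative(1000000UL));
    /* sum_recursive(1000000UL) would need about a million stack
     * frames and may overflow a default 8 MB stack. */
    printf("recursive (small n): %lu\n", sum_recursive(1000UL));
    return 0;
}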

Solution 3:

It largely depends on what architecture you're on (32-bit or 64-bit) and whether you're multithreaded or not.

By default in a single-threaded process, i.e. the main thread created by the OS at exec() time, your stack will usually grow until it hits something else in the address space. This means that it is generally possible, on a 32-bit machine, to have, say, 1 GB of stack.

However, this is definitely NOT the case in a multithreaded 32-bit process. In a multithreaded process, all the thread stacks have to share the same address space and hence must be allocated up front, so each thread typically gets a small amount of address space (e.g. 1 MB) so that many threads can be created without exhausting it.
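If the default per-thread size is wrong for your workload, POSIX threads let you request a size explicitly with pthread_attr_setstacksize(). A minimal sketch (compile with -pthread; the 1 MB figure is just illustrative):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    printf("thread running\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    int err;

    pthread_attr_init(&attr);
    /* Request a 1 MB stack; the size must be at least
     * PTHREAD_STACK_MIN or pthread_attr_setstacksize() fails. */
    pthread_attr_setstacksize(&attr, 1024 * 1024);

    /* pthread functions return an error code rather than setting errno. */
    err = pthread_create(&tid, &attr, worker, NULL);
    if (err != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", err);
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}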

So in a multithreaded process the stack is small and finite; in a single-threaded one it grows essentially until it hits something else in the address space (which the default layout tries to ensure doesn't happen too soon).

On a 64-bit machine, of course, there is a lot more address space to play with.

In any case you can always run out of virtual memory, at which point you'll typically get a SIGSEGV (or possibly a SIGBUS).