How to predict the maximum call depth of a recursive method?

For the purposes of estimating the maximum call depth a recursive method may achieve with a given amount of memory, what is the (approximate) formula for calculating the memory used before a stack overflow error is likely to occur?

Edit:

Many have responded with "it depends", which is reasonable, so let's remove some of the variables by using a trivial but concrete example:

public static int sumOneToN(int n) {
    return n < 2 ? 1 : n + sumOneToN(n - 1);
}

Running this in my Eclipse IDE blows the stack for n just under 1000 (surprisingly low, to me). Could this call depth limit have been estimated without executing it?

Edit: I can't help thinking that Eclipse has a fixed max call depth of 1000, because I got to 998, but there's one frame for main and one for the initial call to the method, making 1000 in all. This is "too round" a number IMHO to be a coincidence. I'll investigate further. I have just discovered the -Xss VM parameter; it sets the maximum stack size, so the Eclipse launcher must have some -Xss value set somewhere.


This is clearly JVM- and possibly also architecture-specific.

I've measured the following:

  static int i = 0;

  public static void rec0() {
      i++;      // count each frame
      rec0();   // recurse until the stack overflows
  }

  public static void main(String[] args) {
      ...
      try {
          i = 0;
          rec0();
      } catch (StackOverflowError e) {
          System.out.println(i);  // depth reached before the overflow
      }
      ...
  }

using

Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)

running on x86.

With a 20MB Java stack (-Xss20m), the amortized cost fluctuated around 16-17 bytes per frame. The lowest I've seen was 16.15 bytes/frame, so I conclude that the marginal cost is 16 bytes per frame and the rest is fixed overhead.
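For reference, the per-frame figure is just the stack size divided by the depth reached. A minimal sketch of that arithmetic (reportFrameCost is my own name, not part of the harness above; call it right after the catch block):

  static void reportFrameCost(long stackBytes, int depth) {
      // Amortized cost = total stack budget / frames reached before overflow.
      // With -Xss20m and ~16 bytes/frame, depth should be roughly
      // 20 MB / 16 B, i.e. around 1.3 million frames.
      System.out.printf("%.2f bytes/frame%n", (double) stackBytes / depth);
  }

  // usage, right after catching the StackOverflowError:
  // reportFrameCost(20L * 1024 * 1024, i);   // 20 MB matches -Xss20m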

A function that takes a single int has basically the same cost, 16 bytes/frame.

Interestingly, a function that takes ten ints requires 32 bytes/frame. I am not sure why the cost is so low: ten ints alone are 40 bytes of arguments, yet the frame grows by only 16 bytes over the no-arg version.
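For concreteness, the variants I'm describing look something like this (rec1/rec10 are illustrative names; i is the same counter as above):

  // One int parameter: ~16 bytes/frame, same as the no-arg version.
  public static void rec1(int a) {
      i++;
      rec1(a);
  }

  // Ten int parameters: ~32 bytes/frame in my measurements.
  public static void rec10(int a, int b, int c, int d, int e,
                           int f, int g, int h, int j, int k) {
      i++;
      rec10(a, b, c, d, e, f, g, h, j, k);
  }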

The above results apply after the code's been JIT compiled. Prior to compilation the per-frame cost is much, much higher. I haven't yet figured out a way to estimate it reliably. However, this does mean that you have no hope of reliably predicting maximum recursion depth until you can reliably predict whether the recursive function has been JIT compiled.
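One way to probe this effect (HotSpot-specific flags; StackDepthTest is a hypothetical class name wrapping the harness above):

  # interpreter only: every frame stays at its (much larger) interpreted size
  java -Xss20m -Xint StackDepthTest

  # log JIT activity, to see if/when rec0() actually gets compiled
  java -Xss20m -XX:+PrintCompilation StackDepthTest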

All of this was tested with ulimit stack sizes of 128K and 8MB; the results were the same in both cases.


Only a partial answer: per the JVM 7 specification, §2.5.2, stack frames may be allocated on the heap, and the stack size may be dynamic. I can't say for certain, but it seems it should be possible for the stack size to be bounded only by the heap size:

Because the Java virtual machine stack is never manipulated directly except to push and pop frames, frames may be heap allocated.

and

This specification permits Java virtual machine stacks either to be of a fixed size or to dynamically expand and contract as required by the computation. If the Java virtual machine stacks are of a fixed size, the size of each Java virtual machine stack may be chosen independently when that stack is created.

A Java virtual machine implementation may provide the programmer or the user control over the initial size of Java virtual machine stacks, as well as, in the case of dynamically expanding or contracting Java virtual machine stacks, control over the maximum and minimum sizes.

So it'll be up to the JVM implementation.
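One practical consequence: besides the VM-wide -Xss setting, the standard library lets you request a per-thread stack size via the four-argument Thread constructor. Per its Javadoc, the value is only a hint that some JVMs ignore, so treat this as a sketch rather than a guarantee:

  public static void main(String[] args) throws InterruptedException {
      // Request a 256 MB stack for this one thread (a hint; may be ignored),
      // then run the question's sumOneToN at a depth that overflows a default stack.
      Thread t = new Thread(null, () -> System.out.println(sumOneToN(50_000)),
                            "deep-recursion", 256L * 1024 * 1024);
      t.start();
      t.join();
  }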