Jenkins CI - Cannot allocate memory

Solution 1:

Orien is correct: it is the fork() system call, triggered by ProcessBuilder, Runtime.exec, or any other means by which the JVM executes an external process (e.g. another JVM running ant, a git command, etc.).

There have been some posts on the Jenkins mailing lists about this: Cannot run program "git" ... error=12, Cannot allocate memory

There is a nice description of the issue on the SCons dev list: fork()+exec() vs posix_spawn()

There is a long-standing JVM bug report with solutions: Use posix_spawn, not fork, on S10 to avoid swap exhaustion. But I'm not sure whether this fix actually made it into JDK7, as the comments suggest was the plan.

In summary, on Unix-like systems, when one process (e.g. the JVM) needs to launch another process (e.g. git) a system call is made to fork() which effectively duplicates the current process and all its memory (Linux and others optimize this with copy-on-write so the memory isn't actually copied until the child attempts to write to it). The duplicate process then makes another system call, exec() to launch the other process (e.g. git) at which point all that copied memory from the parent process may be discarded by the operating system. If the parent process is using large amounts of memory (as JVM processes tend to do), the call to fork() may fail if the operating system determines it does not have enough memory+swap to hold two copies, even if the child process will never actually use that copied memory.
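
To make the failure mode concrete, here is a minimal sketch (not from the original answer; the "git --version" command and the class name ForkDemo are just placeholders) of the kind of call that triggers fork()+exec() under the hood. On a machine without enough free memory plus swap to duplicate the JVM, the start() call is where error=12 surfaces:

    import java.io.IOException;

    public class ForkDemo {
        public static void main(String[] args) throws InterruptedException {
            // ProcessBuilder.start() fork()s the (possibly very large) JVM
            // and then exec()s the external command in the child process.
            ProcessBuilder pb = new ProcessBuilder("git", "--version");
            pb.redirectErrorStream(true);
            try {
                Process p = pb.start();   // fork() happens here
                System.out.println("git exited with " + p.waitFor());
            } catch (IOException e) {
                // On a memory-constrained host this is reported as:
                // java.io.IOException: error=12, Cannot allocate memory
                e.printStackTrace();
            }
        }
    }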

There are several solutions:

  • Add more physical memory/RAM to the machine.

  • Add more swap space to trick the fork() into working, even though the swap space is not strictly needed for anything. This is the solution I chose because it's fairly easy to add a swapfile, and I did not want to live with the potential for processes being killed due to overcommit.

  • On Linux, enable the overcommit_memory option of the vm system (/proc/sys/vm/overcommit_memory); a small sketch for checking the current setting from Java follows this list. With overcommit, the call to fork() will always succeed, and since the child process isn't actually going to use that copy of the memory, all is well. Of course, it's possible that with overcommit your processes will actually attempt to use more memory than is available and will be killed by the kernel. Whether this is appropriate depends on the other uses of the machine. Mission-critical machines should probably not risk the out-of-memory killer running amok, but an internal development server that can afford some downtime would be a good place to enable overcommit.

  • Change the JVM to not use fork()+exec() but to use posix_spawn() when available. This is the solution requested in the JVM bug report above and mentioned on the SCons mailing list. It is also implemented in java_posix_spawn.

    I'm trying to find out whether that fix made it into JDK7. If not, I wonder if the Jenkins people would be interested in a workaround such as java_posix_spawn. There seem to have been attempts to integrate that into Apache commons-exec.

    Programmieraffe, I'm not 100% sure, but your link does suggest that the fix is in JDK7 and in JDK6 from 1.6.0_23 onward. For the record, I was running OpenJDK 1.6.0_18.
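
Not part of the original answer, but if it helps to see which of these knobs is actually set on a given Linux build machine, something like the following sketch (the class name OvercommitCheck is purely illustrative) can be run on the Jenkins host to print the current overcommit mode and the available swap:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class OvercommitCheck {
        public static void main(String[] args) throws IOException {
            // 0 = heuristic overcommit (kernel default), 1 = always overcommit, 2 = strict accounting
            System.out.println("vm.overcommit_memory = " + firstLine("/proc/sys/vm/overcommit_memory"));

            // SwapTotal/SwapFree show whether extra swap (the second solution above) is in place.
            BufferedReader meminfo = new BufferedReader(new FileReader("/proc/meminfo"));
            try {
                String line;
                while ((line = meminfo.readLine()) != null) {
                    if (line.startsWith("SwapTotal") || line.startsWith("SwapFree")) {
                        System.out.println(line);
                    }
                }
            } finally {
                meminfo.close();
            }
        }

        private static String firstLine(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            try {
                return reader.readLine().trim();
            } finally {
                reader.close();
            }
        }
    }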

See https://stackoverflow.com/questions/1124771/how-to-solve-java-io-ioexception-error-12-cannot-allocate-memory-calling-run

Solution 2:

Note the exception message: "Cannot run program "/usr/bin/env": java.io.IOException: error=12, Cannot allocate memory". The Java process is trying to fork a new process to run the command /usr/bin/env, but the operating system has run out of memory resources to create a new process. This is not the same as the Java VM running out of memory, so no amount of fiddling with -Xmx flags will fix it. You'll need to monitor your memory resources while running your build. Increasing the swap space will likely fix your problem.
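
A minimal monitoring sketch, assuming a HotSpot/OpenJDK-based JVM (the com.sun.management extension of OperatingSystemMXBean is not guaranteed on other JVMs; the class name MemoryWatch is just a placeholder), that logs the operating-system-level figures that matter here rather than the Java heap:

    import java.lang.management.ManagementFactory;

    public class MemoryWatch {
        public static void main(String[] args) {
            // The cast only works on JVMs that provide the com.sun.management extension.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

            long mb = 1024L * 1024L;
            System.out.println("free physical memory: " + os.getFreePhysicalMemorySize() / mb + " MB");
            System.out.println("free swap space:      " + os.getFreeSwapSpaceSize() / mb + " MB");
            System.out.println("committed virtual:    " + os.getCommittedVirtualMemorySize() / mb + " MB");
        }
    }

If free physical memory plus free swap drops below the virtual size of the Jenkins JVM, a fork() for /usr/bin/env is likely to fail with exactly the message above.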