How does Docker deal with OOM killer and memory limits?

Solution 1:

Oh! Looks like I forgot to post the answer.

The problem above is with my Java process; it's not related to Docker. I mistakenly thought that the OOM report prints RSS in kilobytes. That's wrong: the OOM report prints the number of pages, which usually take 4 KiB each.

In my case, pid 26675 takes 64577 pages for RSS, which equals 64577 * 4 KiB = 258308 KiB. Adding the two bash processes brings us right up to the limit of the current cgroup: 262144 kB.
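For reference, here is that arithmetic as a tiny runnable sketch (the class and variable names are mine, and it assumes the common 4 KiB page size; verify with `getconf PAGESIZE` on the host):

```java
// Convert an OOM-report RSS page count to kilobytes.
// Assumes 4 KiB pages, the usual default on x86-64 Linux.
public class OomRss {
    public static void main(String[] args) {
        long rssPages = 64_577;             // "rss" column for pid 26675 in the OOM report
        long pageSizeKb = 4;                // page size in KiB
        long rssKb = rssPages * pageSizeKb;
        System.out.println(rssKb + " KiB"); // 258308 KiB, just under the 262144 kB cgroup limit
    }
}
```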

So the further analysis must happen on the JVM side: heap and Metaspace analysis, native memory tracking, thread stacks, and so on.
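As a starting point for that kind of analysis, the sketch below (class name is my own) dumps every JVM memory pool, heap and non-heap alike, so you can see how much the process uses outside the heap. For the native side, you can additionally start the JVM with -XX:NativeMemoryTracking=summary and inspect it with `jcmd <pid> VM.native_memory summary`.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Dump the usage of every JVM memory pool (heap, Metaspace, code cache, ...)
// to see where the resident memory actually goes.
public class MemoryPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s %-15s used=%8d KiB  committed=%8d KiB%n",
                    pool.getName(),
                    pool.getType(),
                    pool.getUsage().getUsed() / 1024,
                    pool.getUsage().getCommitted() / 1024);
        }
    }
}
```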

Solution 2:

Not all Java memory is on the heap. Before Java 8 there was PermGen, much of which has since moved to Metaspace. You also have a stack (typically 1 MiB) for every thread, plus the JVM's own code and native allocations. It appears your container is undersized.

There are tunables for PermGen and stack sizing (-XX:MaxPermSize on pre-Java-8 JVMs, -Xss for thread stacks). Metaspace will grow as much as required unless you cap it with -XX:MaxMetaspaceSize. There are demonstration programs that will grow Metaspace to huge sizes; a sketch of one follows.
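This is a minimal sketch of my own (not from any particular library): it keeps defining the same class under fresh class loaders, so the class metadata accumulates in Metaspace and can never be unloaded.

```java
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Grows Metaspace without bound by re-defining the same class under many
// fresh class loaders. Metadata can't be unloaded while its loader is live.
// Run with a cap to see it die quickly: java -XX:MaxMetaspaceSize=32m MetaspaceGrower
public class MetaspaceGrower {
    static class Victim {} // class whose bytecode we re-define

    static final List<Class<?>> pinned = new ArrayList<>(); // keeps every loader reachable

    public static void main(String[] args) throws Exception {
        final byte[] bytes;
        try (InputStream in = MetaspaceGrower.class
                .getResourceAsStream("MetaspaceGrower$Victim.class")) {
            bytes = in.readAllBytes(); // raw bytecode of Victim
        }
        for (int i = 0; ; i++) { // ends with OutOfMemoryError: Metaspace
            pinned.add(new ClassLoader() {
                Class<?> define() {
                    return defineClass("MetaspaceGrower$Victim", bytes, 0, bytes.length);
                }
            }.define());
            if (i % 10_000 == 0) System.out.println("defined " + i + " classes");
        }
    }
}
```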

Study how the JVM lays out memory before resizing your container. The JVM itself will fail with an OutOfMemoryError if the memory it's given is too small, and individual threads will fail with a StackOverflowError if the stack size is too small.
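To see the thread-stack failure mode concretely, here is a tiny sketch (class name is my own invention); shrink the stack with -Xss and watch the maximum recursion depth drop:

```java
// Recurses until the thread's stack is exhausted, printing the depth reached.
// Compare e.g. `java -Xss256k StackDepth` with the default stack size.
public class StackDepth {
    static long depth = 0;

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("stack overflowed at depth " + depth);
        }
    }

    static void recurse() {
        depth++;
        recurse();
    }
}
```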