Downside of unlimited core size? Where do core files go?

Unlimited core dumps are not advisable in most situations, but they are technically fine. A core dump contains at most all of the memory of the crashing process, so it can be no larger than your RAM + swap. Hopefully you have more free disk space than that.

In real life, core files should be small-ish compared to total RAM + swap.

The file should end up in the current working directory of the process. For upstart tasks that don't chdir, that's usually /. If they do change directory, you're on your own to hunt them down. You can, however, hard-code a path for them.

You can check /proc/sys/kernel/core_pattern for the pattern. If you set it with, for example, echo "/var/log/core" > /proc/sys/kernel/core_pattern, then all your cores end up in /var/log, named "core".
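The pattern also supports %-specifiers that the kernel expands when it writes the file; %p (PID of the dumping process) and %e (executable name) are two documented in core(5). A small bash sketch simulating that expansion (the expand_pattern helper is made up for illustration, not kernel code):

```shell
#!/bin/bash
# Simulate how the kernel expands core_pattern %-specifiers.
# %p and %e are real specifiers from core(5); this helper is hypothetical.
expand_pattern() {
    local pattern=$1 pid=$2 exe=$3
    pattern=${pattern//%p/$pid}   # %p -> PID of the dumping process
    pattern=${pattern//%e/$exe}   # %e -> executable name
    echo "$pattern"
}

expand_pattern "/var/log/core.%e.%p" 4242 mydaemon
# -> /var/log/core.mydaemon.4242
```

So a pattern like /var/log/core.%e.%p keeps dumps from different programs from overwriting each other.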


A core file is an image of a process that is created by the operating system when the process terminates unexpectedly. Core files get created when a program misbehaves due to a bug, or a violation of the CPU or memory protection mechanisms. The operating system kills the program and creates the core file.

This file can be very useful in determining what went wrong with a process. Whether core-file creation is enabled by default depends on the distribution and version of Linux that you have.

If you don't want core files at all, set "ulimit -c 0" in your start-up files. That's the default on many systems; in /etc/profile you may find:

ulimit -S -c 0 > /dev/null 2>&1

Because truncated core files are of no practical use, set the size of the Linux core file to "unlimited" when you do want dumps.

Usage of ulimit         Action
ulimit -c               # check the current core file size limit
ulimit -c 0             # turn off core files
ulimit -c x             # set the maximum core file size to x units of 1024 bytes
ulimit -c unlimited     # turn on core files with unlimited size
ulimit -n unlimited     # allow an unlimited number of open file descriptors
ulimit -p               # show the pipe buffer size
ulimit -s               # show the maximum stack size for a process
ulimit -u               # show the maximum number of user processes
help ulimit             # list other options

The core file is placed into the current working directory of the process, subject to that process's write permissions and free disk space.

Depending on the kernel version, a useful kernel option is available that gives core files more meaningful names. As root, sysctl -w kernel.core_uses_pid=1 ensures that core files have a name of the form "core.PID".

If you DO want core files, you need to reset that in your own .bash_profile:

ulimit -c 50000

would allow core files but limit them to 50,000 units of 1024 bytes each (about 49 MB).
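Note that ulimit changes only affect the current shell and its children, which is why the setting belongs in a start-up file. A quick sketch, runnable in any POSIX shell, lowers the soft (-S) core limit inside a subshell and reads it back; the parent shell's limit stays untouched:

```shell
#!/bin/sh
# Lower the soft core-file limit inside a subshell and print it.
# The change dies with the subshell; the parent shell is unaffected.
( ulimit -S -c 0; ulimit -S -c )
# -> 0
```

The soft limit can be raised again up to the hard limit (ulimit -H -c); only root can raise the hard limit.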

You have more control of core files in /proc/sys/kernel/

For example, you can eliminate the PID that gets appended to the name:

echo "0" > /proc/sys/kernel/core_uses_pid 

Core files will then just be named "core". People do things like that so that a user can choose to put a non-writable file named "core" in directories where they don't want core dumps generated. That could be a directory (mkdir core) or a file (touch core; chmod 000 core).
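The effect of core_uses_pid on the file name can be sketched with a tiny shell helper (hypothetical, for illustration; the kernel does this internally):

```shell
#!/bin/sh
# Mimic the kernel's naming choice: with core_uses_pid=1 the dump
# is "core.<pid>", otherwise it is plain "core".
core_name() {
    uses_pid=$1
    pid=$2
    if [ "$uses_pid" -eq 1 ]; then
        echo "core.$pid"
    else
        echo "core"
    fi
}

core_name 1 1234   # -> core.1234
core_name 0 1234   # -> core
```

The fixed name "core" is what makes the non-writable-file trick above work: every dump in that directory targets the same name.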

But perhaps more interesting is that you can do:

mkdir /tmp/corefiles 
chmod 777 /tmp/corefiles 
echo "/tmp/corefiles/core" > /proc/sys/kernel/core_pattern 

All core files then get tossed into /tmp/corefiles (don't change core_uses_pid if you do this).

Test this with a simple script:

#!/bin/sh
# script that dumps core
kill -s SIGSEGV $$
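If you just want to confirm that the crash happened without hunting for the file, check the exit status: a process killed by a signal exits with 128 + the signal number, and SIGSEGV is 11, so you should see 139.

```shell
#!/bin/sh
# Kill a child shell with SIGSEGV and inspect the status the parent sees.
# 128 + 11 (SIGSEGV) = 139.
sh -c 'kill -s SEGV $$'
echo "exit status: $?"
# -> exit status: 139
```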

Under Ubuntu, creation of core files for the collectd daemon is controlled via the file /etc/default/collectd. You can enable the creation of core dumps by setting:

ENABLE_COREFILES=1

Locating the core file

Once the daemon has crashed, a core file will be created in its current working directory. By default, this is pkglocalstatedir, i.e. prefix/var/lib/collectd. If you installed a package, this directory is most likely /var/lib/collectd.
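If the working directory is unclear, you can hunt for stray core files by name. A self-contained sketch against a scratch directory (in real use you would point find at /, /var/lib/collectd, or wherever the daemon ran):

```shell
#!/bin/sh
# Create a scratch directory with a fake core file, then locate it by
# name, the same way you would search a real filesystem.
dir=$(mktemp -d)
touch "$dir/core.1234"
find "$dir" -name 'core*'
# prints the path of the fake core file, e.g. /tmp/tmp.XXXXXX/core.1234
rm -rf "$dir"
```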

Sources: AP Lawrence, and IBM