Automatically kill a process if it exceeds a given amount of RAM
I work on large-scale datasets. When testing new software, a script will sometimes sneak up on me, quickly grab all available RAM, and render my desktop unusable. I'd like a way to set a RAM limit for a process so that if it exceeds that amount, it will be killed automatically. A language-specific solution probably won't work, as I use all sorts of different tools (R, Perl, Python, Bash, etc).
So is there some sort of process-monitor that will let me set a threshold amount of RAM and automatically kill a process if it uses more?
I would strongly advise against doing this. As suggested by @chrisamiller, setting ulimit will limit the RAM available to the process.
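For example, a minimal sketch of the ulimit approach (the 4 GB cap and the script name are assumptions for illustration; ulimit -v takes a value in kilobytes and applies to the current shell and everything started from it):

# cap virtual memory for this shell and its children at ~4 GB (value in KB)
ulimit -v 4194304
# anything launched from this shell now fails allocations beyond the cap
Rscript analysis.R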
But if you still insist, follow this procedure.
- Save the following script as killif.sh:

#!/bin/sh
# $1 is the process id
# $2 is the memory limit in MB
if [ $# -ne 2 ]; then
    echo "Invalid number of arguments"
    exit 1
fi
while true; do
    # total memory of the process in KB, taken from pmap's "total" line
    SIZE=$(pmap "$1" | grep total | grep -o "[0-9]*")
    SIZE=${SIZE%%K*}
    SIZEMB=$((SIZE/1024))
    echo "Process id = $1  Size = $SIZEMB MB"
    if [ "$SIZEMB" -gt "$2" ]; then
        printf "SIZE has exceeded the limit.\nKilling the process...\n"
        kill -9 "$1"
        echo "Killed the process"
        exit 0
    else
        echo "SIZE has not exceeded the limit yet"
    fi
    sleep 10
done
- Now make it executable:

chmod +x killif.sh
- Now run the script in a terminal, replacing PROCID with the actual process id and SIZE with the limit in MB:

./killif.sh PROCID SIZE

For example:

./killif.sh 132451 100
If SIZE is 100, the process will be killed once its RAM usage goes beyond 100 MB.
Caution: Make sure you know what you are trying to do; killing a process this way is not a good idea. If the process has its own shutdown or stop command, edit the script and replace the kill -9 command with that shutdown command.
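For example, a gentler variant of the kill step (just a sketch; it assumes the target process shuts down cleanly on SIGTERM):

# ask the process to shut down first
kill -TERM "$1"
sleep 5
# force-kill only if it is still alive after the grace period
kill -0 "$1" 2>/dev/null && kill -9 "$1"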
I hate to be the guy who answers his own question, but this morning I found an alternative method, wrapped into a nice little utility. It'll limit CPU time or memory consumption:
https://github.com/pshved/timeout
I'm giving this one a shot first but upvotes to Amey Jah for the nice answer. I'll check it out if this one fails me.
Try the prlimit tool from the util-linux package. It runs a program with resource limits, using the prlimit system call to set up the limits, which are then enforced purely by the kernel.
You can configure 16 limits (see the usage sketch after this list), including:
- maximum amount of CPU time in seconds
- maximum number of user processes
- maximum resident set size ("used memory")
- maximum size a process may lock into memory
- size of virtual memory
- maximum number of open files
- maximum number of file locks
- maximum number of pending signals
- maximum bytes in POSIX message queues
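A minimal usage sketch (the 2 GB value and the script name are assumptions; --as caps the address space, i.e. the virtual memory size):

# run a command with its virtual memory capped at ~2 GB
prlimit --as=2000000000 -- python3 analysis.py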
This was too big to fit in a comment. I modified Amey's original script to use pgrep, so instead of having to manually enter the process id, you can select processes by name. For example, ./killIf.sh chrome 4000 kills any chrome process that exceeds 4 GB of memory usage.
#!/bin/sh
# $1 is the process name
# $2 is the memory limit in MB
if [ $# -ne 2 ]; then
    echo "Invalid number of arguments"
    exit 1
fi
while true; do
    pgrep "$1" | while read -r procId; do
        # total memory of the process in KB, from pmap's "total" line
        SIZE=$(pmap "$procId" | grep total | grep -o "[0-9]*")
        SIZE=${SIZE%%K*}
        SIZEMB=$((SIZE/1024))
        echo "Process id = $procId  Size = $SIZEMB MB"
        if [ "$SIZEMB" -gt "$2" ]; then
            printf "SIZE has exceeded the limit.\nKilling the process...\n"
            kill -9 "$procId"
            echo "Killed the process"
            # note: this exit only leaves the piped subshell, so the outer
            # loop keeps watching the remaining matching processes
            exit 0
        else
            echo "SIZE has not exceeded the limit yet"
        fi
    done
    sleep 1
done
Be careful to choose a narrow pgrep pattern and a large enough memory limit so that you do not kill unintended processes.
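If you want the monitor to keep running after you close the terminal, something like this should work (the log file name is just an assumption):

# run the monitor detached from the terminal and keep a log of what it kills
nohup ./killIf.sh chrome 4000 > killif.log 2>&1 &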