What are some ways to prevent user cron jobs from crushing the servers?
You can use cron.allow and cron.deny to limit which users have access to cron at all, or you can use PAM limits to cap CPU time, number of processes, and so on. Beyond that, the solution is to build something yourself that monitors and polices users' cron jobs, because cron doesn't really have a limit on how many jobs it will run at once. I think cPanel has a setting for the number of cron jobs running at the same time, but that's a product-specific tool (not sure).
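As a sketch of the access-control and PAM pieces (the user name `alice` and the limit values are placeholders):

```
# /etc/cron.allow — if this file exists, only the users listed in it may use crontab
alice

# /etc/security/limits.conf — enforced by pam_limits at session start,
# provided cron's PAM stack includes pam_limits (it does on Debian/Ubuntu)
alice  hard  nproc  50    # at most 50 processes
alice  hard  cpu    30    # at most 30 minutes of CPU time per process
```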
I think you have one of these problems:

- Not enough memory to run all the crontabs at the same time. You can fix that by:
  - adding more RAM
  - limiting the maximum memory that a user can allocate
  - rescheduling the jobs to lower the number of concurrent jobs (you might need to replace crond with a different scheduler)
- High I/O. You can fix that by:
  - lowering the I/O priority with ionice
  - rescheduling the jobs to lower the number of concurrent jobs
Try to find out whether the machine is swapping; if it is not swapping during the night, change cron's I/O priority class to idle:
sudo ionice -c 3 -p $(pgrep cron)
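The same idea can be applied per job inside a user's crontab; a sketch (the script path is a placeholder):

```
# Run the nightly job in the idle I/O class so it only gets disk
# bandwidth when nothing else wants it
30 3 * * * ionice -c3 /home/alice/bin/nightly-job.sh
```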
I have always scheduled cron jobs at random times (especially the minutes). I commonly see cron examples that run exactly at midnight, like:
0 0 * * * /usr/bin/echo "Job ran"
If you have a lot of jobs defined like that, you are asking for trouble. Unfortunately, these are often long-running system jobs. I also tend to spread jobs across different hours of a batch-processing window (23:00 to 05:00).
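A minimal sketch of picking a random slot when installing an entry (the 23:00–05:00 window and the command are just examples):

```shell
# Choose a random minute, and a random hour inside the 23:00-05:00 window,
# then print a crontab line you can paste into `crontab -e`
minute=$((RANDOM % 60))
hour=$(( (23 + RANDOM % 7) % 24 ))   # one of 23,0,1,2,3,4,5
echo "$minute $hour * * * /usr/bin/echo \"Job ran\""
```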
I like the newer cron layout used on Ubuntu. It has several /etc/cron.* directories in which to place jobs. The scripts in each directory get run in sequence rather than in parallel, which limits the load.
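That sequencing comes from run-parts entries in the system /etc/crontab on Debian/Ubuntu; the stock file looks roughly like this (times may differ between releases):

```
# m h dom mon dow user  command
17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
```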
You should be able to see what is scheduled in the files located in /var/spool/cron/crontabs. Reading these files requires root access. If it is particular users who are causing the problems, discuss the problem with them.
You could also check /var/log/syslog for CRON entries to see what is being run, and when.
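As a sketch, this is the sort of one-liner that pulls the time, user, and command out of those entries (the log line here is a made-up sample in the usual syslog CRON format; on a real system you would grep /var/log/syslog itself):

```shell
sample='Jun  1 00:00:01 myhost CRON[12345]: (alice) CMD (/usr/bin/echo "Job ran")'
# Keep only CRON lines, then reduce each to: timestamp  user  command
echo "$sample" | grep 'CRON\[' \
  | sed -E 's/^([A-Za-z]+ +[0-9]+ [0-9:]+).*\((.*)\) CMD \((.*)\)$/\1 \2 \3/'
```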