How can I keep my server from being clobbered by zombies from this Ubuntu cron job to remove PHP sessions?

It would probably be best to move the logic out of find and into a script that loops through all of the files given on the command line, checks whether each one is still being accessed, and deletes it if not:

#!/bin/bash

# Called by find with a batch of session files as arguments.
# Delete each file only if no process currently has it open.
for x; do
  if ! /bin/fuser -s "$x" 2>/dev/null; then
    rm "$x"
  fi
done
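If you want to sanity-check the script by hand first, a hypothetical invocation could look like this (the install path and session file names are made up; files that are still open should survive, idle ones should be removed):

# Assumed location; make the script executable once.
chmod +x /usr/local/bin/thatscript.sh

# Pass any number of session files in a single call; only unused ones are deleted.
/usr/local/bin/thatscript.sh /var/lib/php5/sess_abc123 /var/lib/php5/sess_def456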

Then change the cron job to just

09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -execdir thatscript.sh {} +

This will have find collect all the session files matching the max age, then run thatscript.sh with all of them at once (due to the + instead of ;). The script is then responsible for making sure the file is not in use and deleting it. This way, find should only have one direct child itself, and bash should not have any problem cleaning up the fuser and rm children.

From find's docs, it's not clear whether find will automatically divide the list of filenames into multiple executions if it exceeds shell/OS limits (and 13000 files may well do so; older versions of bash had a default command line argument limit of somewhere around 5000). If that's a concern, you can change -execdir thatscript.sh {} + to -print0 | xargs -0 thatscript.sh to have xargs split up the files, as sketched below.
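For reference, that xargs variant of the cron entry would look something like this (same schedule, paths and script as above; thatscript.sh is assumed to be on cron's PATH):

09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -0 thatscript.sh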

Alternatively, if you don't have the drive mounted noatime, change -cmin to -amin and ditch the in-use check entirely:

09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -amin +$(/usr/lib/php5/maxlifetime) -delete

This will remove all the session files last accessed more than [output of the maxlifetime command] minutes ago. As long as you don't have any PHP processes that open a session and then sit around doing nothing for a long time (the default for that maxlifetime on Debian seems to be 24 minutes, which would be a very long time for a page load), this shouldn't zap any sessions currently in use.
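To check the noatime caveat first, something like this should work, assuming a util-linux new enough to ship findmnt (the path is the session directory from above):

# Show the mount options of the filesystem holding the session directory;
# if noatime appears here, -amin won't see real access times and -cmin is safer.
findmnt -no OPTIONS --target /var/lib/php5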


I had this problem on Ubuntu 11.10 as well, and I solved it by editing:

/etc/cron.d/php5 

and replacing the cron line with:

09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete

This is the Ubuntu 11.04 cron job for PHP.


Fix the script so that it waits for its children or ignores SIGCHLD. Can you put the script somewhere we can see it?
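In the meantime, a minimal bash sketch of the first option (placeholder_job is hypothetical, standing in for whatever the real script launches in the background):

#!/bin/bash

# Hypothetical background work; replace with the script's real commands.
placeholder_job &
placeholder_job &

# Reap every background child before exiting so none is left as a zombie.
wait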

Update: It looks like you're triggering a bug in find!


I solved this for a client by moving the sessions from the file system to memcache. They didn't have the zombie processes, but still had zillions of sessions that the cron job couldn't keep up with deleting. It took like 10 minutes to install memcache, reconfigure php.ini, test it out, and add some Munin graphs to watch the memcache size. Presto - server load decreased, everyone happy.

http://www.dotdeb.org/2008/08/25/storing-your-php-sessions-using-memcached/
http://www.ducea.com/2009/06/02/php-sessions-in-memcached/
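For reference, the php.ini change those links walk through is roughly the following; treat it as a sketch that assumes the PECL memcache extension and a local memcached daemon on the default port (the memcached extension instead uses session.save_handler = memcached and drops the tcp:// prefix):

; Store sessions in memcached instead of /var/lib/php5
session.save_handler = memcache
session.save_path = "tcp://127.0.0.1:11211"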