No space left on device error, but df reports more space available
My PHP sessions on my Debian webserver using Apache2 with mod_php
seem to be failing randomly, saying that there’s no space to write them:
sudo tail -60 /var/log/apache2/error.log
[Fri Jan 30 15:55:35 2015] [error] [client xxx.xxx.xxx.xxx] PHP Warning: session_start() [<a href='function.session-start'>function.session-start</a>]: open(/tmp/sess_555555555555555555, O_RDWR) failed: No space left on device (28) in /path/to-first-session-use/core/bootstrap.php on line 18
When I try to:
ls /tmp
It just hangs forever, so that’s bad.
But when I check free space, and check that inode usage is reasonable...
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 150G 121G 22G 85% /
tmpfs 2.0G 0 2.0G 0% /lib/init/rw
udev 10M 16K 10M 1% /dev
tmpfs 2.0G 4.0K 2.0G 1% /dev/shm
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 19922944 11143605 8779339 56% /
tmpfs 513524 4 513520 1% /lib/init/rw
udev 513524 135 513389 1% /dev
tmpfs 513524 3 513521 1% /dev/shm
The numbers seem fine. Sure, 85% is more than I’d like, but it's not 99% or anything.
I suspected this was a problem caused by not rebooting the machine for 5 years and maybe the creation of a lot of small files, but the inode info I'm getting kinda contradicts this. Where should I investigate instead?
Edit:
ls -lh /
drwxrwxrwt 4 root root 692M Feb 1 11:09 tmp/
drwxr-xr-x 10 root root 4.0K Jan 1 2013 usr/
drwxr-xr-x 14 root root 4.0K Oct 7 2010 var/
...etc
Solution 1:
It could be that the /tmp/ directory itself is filled with stale PHP sessions that are not getting cleaned up, meaning the source of the issue is isolated to the /tmp/ directory itself. If that is the case, I would just remove all of the /tmp/sess_* files. First, list all of the sess_* files like this:
ls -la /tmp/sess_*
Or you can get a count with wc like this:
ls -la /tmp/sess_* | wc -l
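If the directory is as bloated as the hanging ls suggests, the shell glob in the commands above may itself fail with an "Argument list too long" error or take a very long time. As a rough sketch that avoids expanding the glob (same /tmp/sess_* naming assumed from the error log), you could count the files with find instead:
sudo find /tmp -maxdepth 1 -type f -name 'sess_*' | wc -l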
Now, once you get some confirmation that there is an insane number of files there, go ahead and run this command to delete the /tmp/sess_* files:
sudo rm -rf /tmp/sess_*
And the ephemeral session files will be blown away.
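If the plain rm fails with "Argument list too long" because of the sheer number of files, a sketch using find (same path and naming assumptions) deletes them without building a huge argument list:
sudo find /tmp -maxdepth 1 -type f -name 'sess_*' -delete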
Another brute-force, but relatively safe, way to deal with this is to blow away the /tmp directory itself, recreate it, and reboot the server. Since the /tmp directory is basically a holding pen for cached material, there is nothing in it that needs to be kept. So my best advice is to run the following command to remove and rebuild the /tmp directory:
rm -rf /tmp && mkdir /tmp/ && chown root:root /tmp && chmod 1777 /tmp
That one-liner is basically a list of shell commands connected by && that will first delete /tmp, recreate /tmp, change the ownership of /tmp back to root:root, and then set the proper permissions on the /tmp directory. If you wish, you can run each command one by one if you feel safer doing it that way:
sudo rm -rf /tmp
sudo mkdir /tmp
sudo chown root:root /tmp
sudo chmod 1777 /tmp
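As a quick sanity check, the rebuilt directory should come back with mode drwxrwxrwt (the sticky bit set by chmod 1777):
ls -ld /tmp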
Once that is done, I would recommend rebooting the server. Things should be cleared up again.
Solution 2:
Sometimes a damaged filesystem can cause effects like this, for example when the /tmp directory itself is damaged, or when it contains too many files.
For "quick" fix:
mv /tmp /tmp.xxx
mkdir /tmp
chmod a+rwxt /tmp
If that helps, reboot the system and fsck the root filesystem. If it checks out clean, just remove the /tmp.xxx directory.
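Since the root filesystem cannot be properly checked while it is mounted read-write, on a sysvinit-era Debian box like this one you can usually schedule the check for the next boot; treat the exact mechanism as an assumption for your release:
sudo touch /forcefsck
sudo reboot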
Another possibility is that /tmp is actually a separate partition or a tmpfs (seen on Linux vservers) that is not shown by df (because df gets its list of partitions from /etc/mtab, which is sometimes incorrect). Try checking the disk space directly on /tmp with:
df /tmp
df -i /tmp
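Because /etc/mtab can be stale, it may also be worth comparing it against the kernel's own view of what is mounted:
grep /tmp /proc/mounts
grep /tmp /etc/mtab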
Another option which usually helps with sessions is to use a different session mechanism. If you have a lot of temporary sessions that don't need to be very persistent, I would recommend using memcache for session storage. Configuring it is very simple: install php-memcache and memcached, and then configure the following in php.ini:
session.save_handler = memcache
session.save_path="tcp://server:port?persistent=1&weight=1&timeout=1&retry_interval=15"
Then sessions will be stored in memcache up to the defined size; above that, the oldest ones will be removed automatically.
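As a rough sketch for a Debian/mod_php setup like this one (package names vary by release, so treat them as assumptions), installing the pieces and checking that the extension is loaded might look like:
sudo apt-get install memcached php5-memcache
sudo service apache2 restart
php -m | grep -i memcache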
Solution 3:
For me, changing fs.inotify.max_user_watches did the trick.
root@grostruc:/# service ssh restart
Error: No space left on device
root@grostruc:/# sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 65536
root@grostruc:/# sysctl fs.inotify.max_user_watches=262144
fs.inotify.max_user_watches = 262144
root@grostruc:/# service ssh restart
To make it permanent, change the value in /etc/sysctl.conf as well.
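For example, append the setting to /etc/sysctl.conf and reload it; a minimal sketch:
echo 'fs.inotify.max_user_watches = 262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p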