/dev/shm is full at 100% use, but has no large files
I recently updated to Ubuntu 20.04.3 (kernel 5.11.0-34-generic #36~20.04.1-Ubuntu SMP), so this may be a bug. After a few hours of use the shared memory partition fills up. According to df, partition /dev/shm has 16G of data in it:
Filesystem Size Used Avail Use% Mounted on
...
tmpfs 16G 16G 0 100% /dev/shm
...
Trying to write a new file to that partition fails:
$ echo "foobar" > /dev/shm/foobar.txt
bash: echo: write error: No space left on device
However, when I look at the files in that partition, they only use around 170K:
$ du -h /dev/shm/*
0 /dev/shm/foobar.txt
4.0K /dev/shm/sem.CiscoAcMemoryLock
4.0K /dev/shm/sem.CiscoAcNamedEventNVM
4.0K /dev/shm/sem.CiscoAcNamedEventOpenDNS
4.0K /dev/shm/sem.CiscoAcNamedEventPostureISE
156K /dev/shm/tmp
I notice this happening because google-chrome dumps core, and I can't restart Chrome until there is space in /dev/shm. The only way I've found to get the memory back is to reboot.
How can I find out what is using space in /dev/shm?
Solution 1:
Files exist on a filesystem as long as they still have a directory entry or are being kept open by a current process. Running du -h /dev/shm/* (adding the * excludes files beginning with .) will only show the former.
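To see everything that still has a directory entry, including names that begin with ., you can point du at the directory itself instead of relying on the shell glob. A minimal sketch (sudo may be needed if other users own files there):
# list all files under /dev/shm, dot-files included, largest last
$ sudo du -ahc /dev/shm | sort -h | tail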
You also need to run sudo lsof /dev/shm, which shows the currently open files on that filesystem.
For example:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
QtWebEngi 654092 user DEL REG 0,31 2610 /dev/shm/.org.chromium.Chromium.eAzBpJ
QtWebEngi 654092 user DEL REG 0,31 2613 /dev/shm/.org.chromium.Chromium.eY7oKn
QtWebEngi 654092 user DEL REG 0,31 2624 /dev/shm/.org.chromium.Chromium.zuBEOF
QtWebEngi 654092 user 22u REG 0,31 144 2610 /dev/shm/.org.chromium.Chromium.eAzBpJ (deleted)
QtWebEngi 654092 user 29u REG 0,31 144 2613 /dev/shm/.org.chromium.Chromium.eY7oKn (deleted)
QtWebEngi 654092 user 46r REG 0,31 1048576 2624 /dev/shm/.org.chromium.Chromium.zuBEOF (deleted)
Lines that end with (deleted) won't be found by du, but will still take up space as long as any process is holding on to that file.
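To get a rough total of the space those deleted-but-open files are still holding, you can sum lsof's SIZE/OFF column. This is only a sketch: it assumes the default column layout shown above (SIZE/OFF as the 7th field) and may count a file more than once if several descriptors hold it open:
# rough total of space held by deleted-but-open files on /dev/shm
$ sudo lsof /dev/shm | awk '/\(deleted\)$/ {sum += $7} END {printf "%.1f MiB\n", sum/1048576}'
Once you know which process is holding the space (the PID column), restarting or killing that process should release it, so a full reboot isn't normally needed.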