Why are Docker images eating up my disk space that is not used by Docker?
I have set up Docker and used a completely different block device to store Docker's system data:
[root@blink1 /]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker
other_args="-H tcp://0.0.0.0:9367 -H unix:///var/run/docker.sock -g /disk1/docker"
Note that /disk1 is on a completely different hard drive, /dev/xvdi:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 5.1G 2.6G 67% /
devtmpfs 1.9G 108K 1.9G 1% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/xvdi 20G 5.3G 15G 27% /disk1
/dev/dm-1 9.8G 1.7G 7.6G 18% /disk1/docker/devicemapper/mnt/bb6c540bae25aaf01aedf56ff61ffed8c6ae41aa9bd06122d440c6053e3486bf
/dev/dm-2 9.8G 1.7G 7.7G 18% /disk1/docker/devicemapper/mnt/c85f756c59a5e1d260c3cdb473f3f4d9e55ac568967abe190eeaf9c4087afeac
The problem is that as I continue to download Docker images and run Docker containers, the other hard drive /dev/xvda1 also gets used up.
I can verify this by removing some Docker images: after I remove a few, /dev/xvda1 has some extra space again.
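For example, a rough way to see the effect (the image ID below is just a placeholder):
df -h /                 # note the Used column for /dev/xvda1
docker rmi <IMAGE ID>
df -h /                 # Used on /dev/xvda1 drops, even though -g points at /disk1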
Am I missing something?
My docker version:
[root@blink1 /]# docker info
Containers: 2
Images: 42
Storage Driver: devicemapper
Pool Name: docker-202:1-275421-pool
Pool Blocksize: 64 Kb
Data file: /disk1/docker/devicemapper/devicemapper/data
Metadata file: /disk1/docker/devicemapper/devicemapper/metadata
Data Space Used: 3054.4 Mb
Data Space Total: 102400.0 Mb
Metadata Space Used: 4.7 Mb
Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.14.20-20.44.amzn1.x86_64
Operating System: Amazon Linux AMI 2014.09
Solution 1:
Deleting my entire /var/lib/docker is not OK for me. These are safer ways:
Option 1:
The following commands from the issue cleared up space for me, and this is a lot safer than deleting /var/lib/docker (on Windows, check your Docker disk image location instead).
Before:
docker info
Example output:
Metadata file:
Data Space Used: 53.38 GB
Data Space Total: 53.39 GB
Data Space Available: 8.389 MB
Metadata Space Used: 6.234 MB
Metadata Space Total: 54.53 MB
Metadata Space Available: 48.29 MB
Command in newer versions of Docker, e.g. 17.x and later:
docker system prune -a
It will show you a warning that it will remove all stopped containers, networks not used by any container, all images without at least one associated container, and the build cache. Generally this is safe to remove. (The next time you run a container, the image may be pulled from the Docker registry again.)
Example output:
Total reclaimed space: 1.243GB
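In recent versions, if you also want to reclaim unused volumes, docker system prune accepts a --volumes flag; be careful, since this deletes any data stored in those volumes:
docker system prune -a --volumes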
You can then run docker info again to see what has been cleaned up
docker info
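Newer versions also have docker system df, which breaks disk usage down by images, containers, local volumes, and build cache, and is often more useful than docker info for spotting what is actually taking the space:
docker system df
docker system df -v   # per-image / per-container / per-volume detail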
Option 2:
Along with this, make sure the programs running inside your Docker containers are not writing many or huge files to the container filesystem.
Check the disk usage of your running containers:
docker ps -s    # may take minutes to return
or for all containers, including exited ones:
docker ps -as   # may take minutes to return
You can then delete the offending container(s):
docker rm <CONTAINER ID>
Find the possible culprit that may be using gigs of space:
docker exec -it <CONTAINER ID> "/bin/sh"
du -h
In my case the program was writing gigs of temp files.
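A sketch of how to surface the biggest offenders once you're inside the container; GNU sort -h is an assumption and may not exist on busybox-based images:
du -ah / 2>/dev/null | sort -rh | head -n 20
# on busybox images without sort -h: du -a / | sort -rn | head -n 20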
(Nathaniel Waisbrot mentioned this issue in the accepted answer, and I got some of this info from the issue thread.)
OR
Commands in older versions of Docker, e.g. 1.13.x (run as root, not via sudo):
# Delete 'exited' containers
docker rm -v $(docker ps -a -q -f status=exited)
# Delete 'dangling' images (if there are none you will get an error: "docker rmi" requires a minimum of 1 argument)
docker rmi $(docker images -f "dangling=true" -q)
# Delete 'dangling' volumes (if there are none you will get an error: "docker volume rm" requires a minimum of 1 argument)
docker volume rm $(docker volume ls -qf dangling=true)
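If you want to avoid the "requires a minimum of 1 argument" errors when there is nothing to clean, a variant of the same commands using xargs -r (GNU xargs skips the command when its input is empty) works as a guard:
docker ps -aq -f status=exited | xargs -r docker rm -v
docker images -qf dangling=true | xargs -r docker rmi
docker volume ls -qf dangling=true | xargs -r docker volume rm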
After:
> docker info
Metadata file:
Data Space Used: 1.43 GB
Data Space Total: 53.39 GB
Data Space Available: 51.96 GB
Metadata Space Used: 577.5 kB
Metadata Space Total: 54.53 MB
Metadata Space Available: 53.95 MB
Solution 2:
It's a kernel problem with devicemapper, which affects the RedHat family of OSes (RedHat, Fedora, CentOS, and Amazon Linux): deleted containers don't free up mapped disk space. This means that on the affected OSes you'll slowly run out of space as you start and restart containers.
The Docker project is aware of this, and the kernel is supposedly fixed in upstream (https://github.com/docker/docker/issues/3182).
A work-around of sorts is to give Docker its own volume to write to ("When Docker eats up your disk space"). This doesn't actually stop it from eating space, just from taking down other parts of your system after it does.
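A minimal sketch of that workaround, assuming you have a spare block device (here /dev/xvdj, purely an example) that you can dedicate to Docker's data directory:
sudo systemctl stop docker       # or: sudo service docker stop on older init systems
sudo mkfs.ext4 /dev/xvdj         # WARNING: wipes anything already on /dev/xvdj
echo '/dev/xvdj /var/lib/docker ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /var/lib/docker       # anything already in /var/lib/docker is hidden by the mount, not copied
sudo systemctl start docker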
My solution was to uninstall docker, then delete all its files, then reinstall:
sudo yum remove docker
sudo rm -rf /var/lib/docker
sudo yum install docker
This got my space back, but it's not much different from just launching a replacement instance. I have not found a nicer solution.
Solution 3:
Move the /var/lib/docker directory.
Assuming the /data directory has enough room (if not, substitute a directory that does):
sudo systemctl stop docker
sudo mv /var/lib/docker /data
sudo ln -s /data/docker /var/lib/docker
sudo systemctl start docker
This way, you don't have to reconfigure docker.
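If you'd rather avoid the symlink, a roughly equivalent alternative on newer Docker versions is to point data-root at the new location in /etc/docker/daemon.json (this sketch assumes you don't already have a daemon.json that would be overwritten):
sudo systemctl stop docker
sudo mv /var/lib/docker /data
echo '{ "data-root": "/data/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker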