AWS ElasticBeanstalk docker-thin-pool getting full and causing re-mount of filesystem as read-only?
The `.ebextensions` approach suggested by David Ellis worked for me. I'm unable to comment on his answer, but I wanted to add that you can create a new EBS volume instead of using a snapshot. To mount a 40GB EBS volume, I used the following:
```yaml
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/xvdcz=:40:true
```
See also this documentation, which has an example of mapping a new 100GB EBS volume to `/dev/sdh`.
The `true` at the end means "delete on terminate".
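If you want to confirm that the mapping (and the delete-on-terminate flag) actually took effect, one option is to inspect the instance's block device mappings with the AWS CLI. This is only a sketch; the instance ID below is a placeholder for one from your environment.

```sh
# Placeholder instance ID -- substitute an instance from your Beanstalk environment.
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].BlockDeviceMappings[].{Device:DeviceName,Volume:Ebs.VolumeId,DeleteOnTerminate:Ebs.DeleteOnTermination}' \
    --output table
```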
I created a new `.ebextensions` directory containing an `ebs.config` file with the above code, then zipped that directory together with my `Dockerrun.aws.json`. Note that the `Dockerrun.aws.json` file must be at the top level of the zip, not inside a subdirectory.
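For reference, a minimal sketch of how the bundle can be built so that `Dockerrun.aws.json` ends up at the top level (the archive name `app.zip` is arbitrary):

```sh
# Expected layout before zipping:
# .
# ├── Dockerrun.aws.json
# └── .ebextensions/
#     └── ebs.config
#
# Zip from inside the project directory so both entries sit at the root of the archive.
zip -r app.zip Dockerrun.aws.json .ebextensions/
```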
To find where Elastic Beanstalk is mounting the volume, use `lsblk` on the failing instance. It was also `/dev/xvdcz` for me, so maybe that is the standard.
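For example (the device name is whatever your instance actually uses; `/dev/xvdcz` is just what I saw):

```sh
# List all block devices with their sizes and mount points.
lsblk
# Or look at the Docker volume directly -- adjust the device name if yours differs.
lsblk /dev/xvdcz
```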
We got hit by the same issue. The root cause seems to be Docker not mounting its storage engine (thin-provisioned `devicemapper` by default in Elastic Beanstalk) with the `discard` option, which in turn fills blocks until it breaks.
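To gauge how close the thin pool is to filling up before it flips the filesystem read-only, you can look at the pool statistics Docker and device-mapper expose. This assumes the `devicemapper` storage driver is in use; the exact pool name varies per instance.

```sh
# With the devicemapper storage driver, `docker info` reports data/metadata pool usage.
docker info | grep -i 'space'

# The raw thin-pool status (used/total blocks) is also visible via device-mapper.
sudo dmsetup status | grep -i 'thin-pool'
```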
I wasn't able to find a definitive solution to this, but here is a workaround (see this comment) that I was able to use on affected instances:
```sh
docker ps -qa | xargs docker inspect --format='{{ .State.Pid }}' | xargs -IZ fstrim /proc/Z/root/
```
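Since containers keep writing, a one-off trim only buys time, so one option is to re-run it periodically, e.g. from cron. The path and weekly schedule below are just a suggestion, not something documented by AWS, and I switched to `docker ps -q` because only running containers have a live PID to trim through.

```sh
#!/bin/sh
# /etc/cron.weekly/docker-fstrim -- illustrative location and cadence, adjust as needed.
# Reclaim unused thin-pool blocks through each running container's rootfs.
docker ps -q | xargs docker inspect --format='{{ .State.Pid }}' | xargs -IZ fstrim /proc/Z/root/
```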