Recommended disk size for GKE nodes?
When I create a new node pool in GKE, the disk size defaults to 100GB.
However, when I SSH into a node that's been up for a while and run df -h,
only about 32GB is in use (I don't actually know where this 32GB comes from).
Do the nodes really need 100GB of disk space? Can I run them with just 10GB, for example? At first I thought the pods would use up space for their volumes, but on GKE the pods provision their own additional Persistent Disks rather than consuming the node's disk, so I'm confused why such a large volume is needed for the node itself.
Solution 1:
Do the nodes really need 100GB disk space? Can I run them with just 10GB for example?
It depends on what you are running in your cluster. If you aren't running many containers, or if the containers aren't that big, you can probably get away with less than 100GB. I just checked one of our clusters running ~20 workloads, and the three nodes were using 10-15GB of space each.
The containers that run in your cluster need disk space regardless of persistent disks. When a container is started, its image is pulled onto the node's disk and its filesystem is mounted as an overlay filesystem. The node also uses storage to cache images it has already pulled.
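If you want to see where that space goes, you can inspect the image cache on a node. A minimal sketch, assuming a recent GKE node image where containerd is the runtime (tooling and paths vary by node image):

    # List the container images cached on the node and their sizes
    sudo crictl images

    # Show how much space the container runtime's state directory uses
    sudo du -sh /var/lib/containerd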
To increase or decrease the disk space of an existing cluster, create a new node pool with the required disk configuration, cordon and drain the old nodes, and then delete the old pool, as sketched below.
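A rough sketch of that procedure with gcloud and kubectl; the cluster and pool names (my-cluster, default-pool, small-disk-pool) and the disk size are placeholders, and on kubectl versions before 1.20 the drain flag is --delete-local-data instead:

    # Create a new pool whose nodes have the disk size you want
    gcloud container node-pools create small-disk-pool \
        --cluster my-cluster --disk-size 50GB --num-nodes 3

    # Cordon and drain every node in the old pool so workloads reschedule
    for node in $(kubectl get nodes \
        -l cloud.google.com/gke-nodepool=default-pool -o name); do
      kubectl cordon "$node"
      kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
    done

    # Delete the old pool once its nodes are empty
    gcloud container node-pools delete default-pool --cluster my-cluster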
Solution 2:
If you've built a standard GKE cluster, the whole 100GB of disk space should be partitioned. You can check the file /proc/partitions on the cluster nodes; the largest partition, /dev/sda1, should be mounted as the stateful partition.
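For example, on a node (assuming Container-Optimized OS, which mounts the stateful partition at /mnt/stateful_partition):

    # Show the partitions the node sees; /dev/sda1 should be the largest
    cat /proc/partitions

    # Check the size and usage of the stateful partition
    df -h /mnt/stateful_partition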
You can change the disk size by pressing "More options" in the "Create a Kubernetes cluster" dialogue in the Cloud Console.
If the cluster is created from the command line, the disk size can be specified with the --disk-size parameter of the gcloud container clusters create command.
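For example, to create a cluster whose nodes have 50GB boot disks (the cluster name and node count are placeholders):

    gcloud container clusters create my-cluster --disk-size 50GB --num-nodes 3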