How to increase disk size in a StatefulSet

I pieced this procedure together from other comments on https://github.com/kubernetes/kubernetes/issues/68737. I tested it on Kubernetes 1.14:

  1. kubectl edit pvc <name> for each PVC in the StatefulSet, to increase its capacity.
  2. kubectl delete sts --cascade=orphan <name> to delete the StatefulSet and leave its pods.
  3. kubectl apply -f <file> to recreate the StatefulSet from its manifest (ideally with its storage request updated to the new size).
  4. kubectl rollout restart sts <name> to restart the pods, one at a time. During restart, the pod's PVC will be resized.

If you want to monitor what's happening, open two more shell windows and run these commands before starting any of the steps above:

  • kubectl get pod -w
  • kubectl get pvc -w
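
Putting the pieces together, here is a minimal sketch of the whole procedure. It assumes a hypothetical StatefulSet named web defined in web.yaml, with three replicas and PVCs data-web-0 through data-web-2, and it uses kubectl patch instead of the interactive kubectl edit; substitute your own names and target size:

# 1. Grow each PVC that belongs to the StatefulSet
kubectl patch pvc data-web-0 -p '{"spec": {"resources": {"requests": {"storage": "50Gi"}}}}'
kubectl patch pvc data-web-1 -p '{"spec": {"resources": {"requests": {"storage": "50Gi"}}}}'
kubectl patch pvc data-web-2 -p '{"spec": {"resources": {"requests": {"storage": "50Gi"}}}}'
# 2. Delete only the StatefulSet object; its pods keep running
kubectl delete sts --cascade=orphan web
# 3. Recreate the StatefulSet from a manifest whose volumeClaimTemplates request the new size
kubectl apply -f web.yaml
# 4. Restart the pods one at a time; each restart triggers the filesystem resize on its PVC
kubectl rollout restart sts web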

To apply this principle to a helm chart, I was able to do the following, based on the input above as well as some guidance in this thread: https://github.com/kubernetes/kubernetes/issues/68737#issuecomment-469647348

The example below uses the following values:

  • StorageClass name: standard
  • StatefulSet name: rabbitmq-server
  • PersistentVolumeClaim (PVC) name: data-rabbitmq-server-0
  • helm release name: rabbitmq-server
  • helm chart name: stable/rabbitmq

These values can be found in your environment using the following commands:

  • PVC name and StorageClass name: kubectl get pvc
  • StatefulSet name: kubectl get sts
  • helm release name: helm list
  • helm chart name: You should know what helm chart you're trying to update :D
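
If you want to script any of this rather than read the table output, the same values can be pulled out with jsonpath; for example, the StorageClass name straight from the PVC named above:

kubectl get pvc data-rabbitmq-server-0 -o jsonpath='{.spec.storageClassName}'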

And here are the steps for updating the PV size in a helm chart's StatefulSet:

  1. kubectl edit storageClass standard and set/ensure allowVolumeExpansion: true (it already was in my case)
  2. kubectl delete sts --cascade=orphan rabbitmq-server
  3. kubectl edit pvc data-rabbitmq-server-0 and change spec size to 50Gi
  4. Change the size in my helm chart (rabbitmq-values.yaml) to 50Gi
  5. helm upgrade --recreate-pods --reuse-values -f rabbitmq-values.yaml rabbitmq-server stable/rabbitmq

NOTE: The last step uses the --recreate-pods flag in order to force a restart of the pods, which triggers the actual filesystem resizing. It also causes downtime for these pods. If you want to try to do it without downtime, you can remove that flag and instead kill/restart one pod at a time, letting the StatefulSet controller bring each one back with the resized volume.
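
For reference, steps 1-3 above can also be done non-interactively with kubectl patch instead of kubectl edit (same names and size as above; treat this as a sketch and adjust to your environment):

# Allow volume expansion on the StorageClass (a no-op if it is already set)
kubectl patch sc standard -p '{"allowVolumeExpansion": true}'
# Delete the StatefulSet object but keep its pods running
kubectl delete sts --cascade=orphan rabbitmq-server
# Grow the claim to the new size
kubectl patch pvc data-rabbitmq-server-0 -p '{"spec": {"resources": {"requests": {"storage": "50Gi"}}}}'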


Even though resizing Persistent Volumes has been supported since Kubernetes 1.11, there still seem to be some issues with it.

As discussed on GitHub in StatefulSet: support resize pvc storage in K8s v1.11 (#68737).

Due to this limitation, many database Operators for Kubernetes don't support PVC resizing. This is a critical issue, because when your database grows larger than you expected, you have no choice but to back up the database and recreate it from the backup.

You have to resize by deleting the StatefulSet, which means deleting all of its Pods, and that causes downtime.

A workaround was posted by DaveWHarvey:

I got around this limitation on Elasticsearch because of work I had to do to avoid EBS volumes failing to attach when they are in the wrong availability zone, i.e. I had created a StatefulSet per AZ. If I want to change some storage characteristic, I create a new "AZ" using the same storage class, migrate all the data to pods in that new AZ, and then destroy the old AZ.

Hope this helps you a bit.


Here is a complete script to resize STS volumes, based on the other answers. I did not have to pass --cascade=false when deleting the STS because it had already been scaled down to 0 before that step.

  1. Make sure the StorageClass supports volume expansion; if necessary, patch it:
kubectl get -o jsonpath='{.allowVolumeExpansion}' sc <SC-NAME>
# should return true, otherwise, patch it:
kubectl patch -p '{"allowVolumeExpansion": true}' sc <SC-NAME>
# then run the first command again
  2. Scale the StatefulSet down to 0 to allow for volume expansion:
# we need the original replica count, so let's save it before scaling down
REPLICAS=$(kubectl get -o jsonpath='{.spec.replicas}' sts/<STS-NAME>)
kubectl scale sts/<STS-NAME> --replicas 0
  3. Patch each PersistentVolumeClaim with the new size. This immediately resizes the PersistentVolume and its backing disk; you can verify by describing the PV and by checking the disk in the cloud vendor's portal. However, describing the PVC will not reflect the new size until a pod is started, at which point the filesystem is automatically resized:
NEW_SIZE=128Gi
for i in $(seq 0 $((REPLICAS-1))); do
  PVC=<PVC-NAME-PREFIX>-$i
  echo "Updating PVC $PVC"
  # Print the current size:
  kubectl get -o jsonpath='{.spec.resources.requests.storage} ' pvc/$PVC
  # Set the new size:
  kubectl patch -p '{"spec": {"resources": {"requests": {"storage": "'$NEW_SIZE'"}}}}' pvc/$PVC
  # Verify the PV:
  echo "Waiting for 10 seconds so that the PV picks up the change..."
  echo "If you still see the same size, do not worry, to see the new size just run this script again"
  sleep 10
  PV=$(kubectl get -o jsonpath='{.spec.volumeName}' pvc/$PVC)
  kubectl get -o jsonpath='{.spec.capacity.storage} ' pv/$PV
  echo "Done"
done
  4. Delete the STS so that it can be recreated with the new size:
kubectl delete sts <STS-NAME> 
  5. Recreate the STS (helm upgrade or kubectl apply) after editing the requested size in its volumeClaimTemplates to match the new size applied to the PVCs. Once started, each pod in the StatefulSet triggers the filesystem resize, which is immediately reflected on its PVC:
for i in $(seq 0 $((REPLICAS-1))); do
  PVC=<PVC-NAME-PREFIX>-$i
  echo "Verifying the size of PVC $PVC"
  # Verify the current size:
  kubectl get -o jsonpath='{.status.capacity.storage} ' pvc/$PVC
done
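
Once the loop above reports the new size for every PVC, the resize is complete. As an optional last check you can look at the filesystem from inside a pod; this assumes the first pod is <STS-NAME>-0, that the volume is mounted at /data, and that the container image ships df, so adjust as needed:

# hypothetical pod name and mount path; substitute your own
kubectl exec <STS-NAME>-0 -- df -h /data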