Is there any way to attach an existing volume (one I just restored from a snapshot) to a PVC?

I have a mongodb-replicaset installed with its Helm chart. The chart creates PVCs based on the StorageClass I provide, and I annotate my volumes with a tag that a cron job picks up to snapshot them.

In the event I need to restore from the backup snapshots, say in another cluster, I know I can create a volume from a snapshot, but I don't know how to turn that volume into the PVC the StatefulSet expects so that it can restart from it.


It turns out a StatefulSet looks for PVCs with specific names. I figured this out from the documentation at https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations, which says:

The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.

And through experimentation I discovered that pre-provisioning just meant creating PersistentVolumeClaims with the expected names.
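
The names follow the StatefulSet convention <claim template name>-<StatefulSet name>-<ordinal>. Stripped way down, the chart's StatefulSet declares something roughly like this (exact fields depend on the chart version, so treat it as a sketch):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pii-mongodb-replicaset
spec:
  replicas: 3
  serviceName: pii-mongodb-replicaset
  ...
  volumeClaimTemplates:
  - metadata:
      name: datadir   # "datadir" + "pii-mongodb-replicaset" + ordinal => datadir-pii-mongodb-replicaset-0
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: mongo-xfs
      resources:
        requests:
          storage: 320Gi

If a PVC with one of those names already exists (with a compatible storage class, size and access mode), the controller uses it instead of provisioning a new one.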

I was able to restore an EBS snapshot to a volume, create a PersistentVolume referencing the restored volume ID directly, then create a PersistentVolumeClaim with the expected name. For instance, this mongo installation expects PVCs named datadir-pii-mongodb-replicaset-[0-2], so after restoring the EBS snapshot to a volume I use the following YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    failure-domain.beta.kubernetes.io/region: us-west-2
    failure-domain.beta.kubernetes.io/zone: us-west-2a
  name: pv-a
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 320Gi
  awsElasticBlockStore:
    fsType: xfs
    volumeID: aws://us-west-2a/vol-xxxxxxxxxxxxx
  storageClassName: mongo-xfs

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: mongodb-replicaset
    release: pii
  name: datadir-pii-mongodb-replicaset-0
  namespace: larksettings-pii
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 320Gi
  storageClassName: mongo-xfs
  volumeName: pv-a
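
After applying both manifests (and repeating for ordinals 1 and 2 with their own restored volumes), I check that each claim comes up Bound to its PersistentVolume before installing the chart. The file name here is arbitrary:

kubectl apply -f restored-volume-0.yaml
kubectl get pvc -n larksettings-pii
kubectl get pv pv-a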

Be careful about availability zones. Since my replica set spanned 3 zones, I needed to restore the three snapshots into separate zones and make sure each PersistentVolume spec (the failure-domain zone label and the volumeID prefix) reflected that.
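
For completeness, restoring a snapshot into a specific zone with the AWS CLI looks roughly like this (the snapshot ID and volume type are placeholders). The VolumeId it returns is what goes into the PersistentVolume's volumeID field, prefixed with aws://<zone>/ as in the spec above, and the zone must match the failure-domain labels:

aws ec2 create-volume \
  --snapshot-id snap-xxxxxxxxxxxxx \
  --availability-zone us-west-2a \
  --volume-type gp2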