What to do with Released persistent volume?

Kubernetes, Google Cloud Platform, Google Cloud Storage, Google Kubernetes Engine

Kubernetes Problem Overview


TL;DR: I'm lost as to how to access the data after deleting a PVC, and as to why the PV doesn't go away once the PVC is deleted.

Steps I'm taking:

  1. created a disk in GCE manually:

     gcloud compute disks create --size 5Gi disk-for-rabbitmq --zone europe-west1-b
    
  2. ran:

     kubectl apply -f /tmp/pv-and-pvc.yaml
    

    with the following config:

     # /tmp/pv-and-pvc.yaml
     apiVersion: v1
     kind: PersistentVolume
     metadata:
       name: pv-for-rabbitmq
     spec:
       accessModes:
       - ReadWriteOnce
       capacity:
         storage: 5Gi
       gcePersistentDisk:
         fsType: ext4
         pdName: disk-for-rabbitmq
       persistentVolumeReclaimPolicy: Delete
       storageClassName: standard
     ---
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: pvc-for-rabbitmq
     spec:
       accessModes:
       - ReadWriteOnce
       resources:
         requests:
           storage: 5Gi
       storageClassName: standard
       volumeName: pv-for-rabbitmq
    
  3. deleted the PVC manually (at a high level, I'm simulating a disastrous scenario here, like accidental deletion or misconfiguration of a Helm release):

     kubectl delete pvc pvc-for-rabbitmq
    

At this point I see the following:

$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                      STORAGECLASS   REASON   AGE
pv-for-rabbitmq   5Gi        RWO            Delete           Released   staging/pvc-for-rabbitmq   standard                8m
$

> A side question, just to improve my understanding: why is the PV still there, even though its reclaim policy is set to Delete? Isn't this what the docs say for the Delete reclaim policy?

Now if I try to re-create the PVC to regain access to the data in the PV:

$ kubectl apply -f /tmp/pv-and-pvc.yaml
persistentvolume "pv-for-rabbitmq" configured
persistentvolumeclaim "pvc-for-rabbitmq" created
$

I still get this for PVs, i.e. the PV is stuck in the Released state:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                             STORAGECLASS   REASON    AGE
pv-for-rabbitmq                            5Gi        RWO            Delete           Released   staging/pvc-for-rabbitmq          standard                 15m
$

...and I get this for PVCs:

$ kubectl get pvc
NAME               STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-for-rabbitmq   Pending   pv-for-rabbitmq   0                         standard       1m
$

It looks like my PV is stuck in the Released status, and the PVC cannot bind to a PV that is not in the Available status.

So why can't the same PV and PVC be friends again? How do I make the PVC regain access to the data in the existing PV?

Kubernetes Solutions


Solution 1 - Kubernetes

kubectl patch pv pv-for-rabbitmq -p '{"spec":{"claimRef": null}}'

This worked for me.
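
A follow-up sketch, using the names from the question: once the claimRef is cleared, the PV should report Available again, and re-applying the original manifest lets the PVC bind to it.

kubectl get pv pv-for-rabbitmq          # STATUS should now show Available
kubectl apply -f /tmp/pv-and-pvc.yaml   # re-creates the PVC, which binds to the existing PV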

Solution 2 - Kubernetes

The phrase from the documentation, "Pods consume node resources and PVCs consume PV resources", may be useful for fully understanding the relationship between a PV and a PVC.

I attempted a full reproduction of the reported behaviour using the provided YAML file, but could not reproduce it: everything returned the expected result. Hence, before providing any further details, here is a walk-through of my reproduction.

Step 1: Create a PD in the europe-west1-b zone

sunny@dev-lab:~$ gcloud compute disks create --size 5Gi disk-for-rabbitmq --zone europe-west1-b

WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O 
performance. For more information, see: 

NAME               ZONE            SIZE_GB  TYPE         STATUS
disk-for-rabbitmq  europe-west1-b  5        pd-standard  READY

Step 2: Create the PV and PVC using the provided YAML file

sunny@dev-lab:~$  kubectl apply -f pv-and-pvc.yaml

persistentvolume "pv-for-rabbitmq" created
persistentvolumeclaim "pvc-for-rabbitmq" created

Step 3: List all the available PVCs

sunny@dev-lab:~$ kubectl get pvc
NAME               STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-for-rabbitmq   Bound     pv-for-rabbitmq   5Gi        RWO            standard       16s

Step 4: List all the available PVs

sunny@dev-lab:~$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                      STORAGECLASS   REASON    AGE
pv-for-rabbitmq   5Gi        RWO            Delete           Bound     default/pvc-for-rabbitmq   standard                 28s

Step 5: Delete the PVC and verify the result

sunny@dev-lab:~$  kubectl delete pvc pvc-for-rabbitmq
persistentvolumeclaim "pvc-for-rabbitmq" deleted

sunny@dev-lab:~$  kubectl get pv

> No resources found.

sunny@dev-lab:~$  kubectl get pvc

> No resources found.

sunny@dev-lab:~$  kubectl describe pvc-for-rabbitmq

> the server doesn't have a resource type "pvc-for-rabbitmq"

As per your question:

> A side question, just to improve my understanding: why is the PV still there, even though its reclaim policy is set to Delete? Isn't this what the docs say for the Delete reclaim policy?

You are absolutely correct. As per the documentation, when users are done with their volume, they can delete the PVC object from the API, which allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. In your YAML it was set to:

Reclaim Policy:  Delete

which means that it should have been deleted immediately. Currently, volumes can either be Retained, Recycled or Deleted.

Why wasn't it deleted? The only explanation I can think of is that the PV was somehow still claimed, most likely because the PVC was not successfully deleted (its capacity is showing as 0); to fix this you would need to delete the Pod that uses it. Alternatively, you can use the kubectl describe pvc command to see why the PVC is still in a Pending state.
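
With the names from the question, that check would be:

kubectl describe pvc pvc-for-rabbitmq   # add -n staging if the PVC lives in the staging namespace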

> And as for the question: How do I make the PVC regain access to the data in the existing PV?

This is not possible because of the reclaim policy (Reclaim Policy: Delete). To make this possible, you would need to use the Retain option instead, as per the documentation.

To validate the theory that you can delete the PVC and keep the disk, do the following (a sketch of these steps follows the list):

  • Change the reclaim policy to Retain
  • Delete the PVC
  • Delete the PV

And then verify if the disk was retained.
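
A minimal sketch of those validation steps, assuming the resource names from the question:

kubectl patch pv pv-for-rabbitmq -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl delete pvc pvc-for-rabbitmq
kubectl delete pv pv-for-rabbitmq
# the underlying GCE PD should still exist even though the PV object is gone
gcloud compute disks describe disk-for-rabbitmq --zone europe-west1-b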

Solution 3 - Kubernetes

The official documentation has the answer; hopefully it helps others looking for the same (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes):

Retain: The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered “released”. But it is not yet available for another claim because the previous claimant’s data remains on the volume. An administrator can manually reclaim the volume with the following steps.

  1. Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
  2. Manually clean up the data on the associated storage asset accordingly.
  3. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.
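
Applied to the setup in the question (assuming the manifest's persistentVolumeReclaimPolicy is changed to Retain and the disk-for-rabbitmq PD still exists), reusing the same storage asset would look roughly like this:

kubectl delete pv pv-for-rabbitmq
# the underlying GCE PD and its data are untouched; recreate the PV (and PVC) from the original manifest
kubectl apply -f /tmp/pv-and-pvc.yaml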

Solution 4 - Kubernetes

I wrote a simple automatic PV releaser controller that finds Released PVs and makes them Available again for new PVCs; check it out here: https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers.

But please make sure you read the disclaimers and that this is exactly what you want. Kubernetes doesn't do this automatically for a reason: workloads aren't supposed to have access to data from other workloads. For the cases when they do, the idiomatic Kubernetes way is StatefulSets, so that Kubernetes guarantees that only replicas of the same workload may claim the old data. My releaser certainly might be useful in some cases, like a CI/CD build cache (which it was created for), but normally a PVC means "give me a clean, ready-to-use storage I can save some data on", so at the very least make it a separate StorageClass.
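
For illustration only, a rough sketch (with hypothetical names) of the StatefulSet approach mentioned above: volumeClaimTemplates gives each replica its own PVC, so only replicas of this workload ever re-claim that data.

# hypothetical StatefulSet for the rabbitmq example; names and image are only illustrative
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: standard
      resources:
        requests:
          storage: 5Gi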

Solution 5 - Kubernetes

Like @Bharat Chhabra's answer, but this modifies all Released PersistentVolumes so they become Available:

kubectl get pv | tail -n+2 | awk '$5 == "Released" {print $1}' | xargs -I{} kubectl patch pv {} --type='merge' -p '{"spec":{"claimRef": null}}'
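
A slightly more robust variant of the same idea, filtering on the API field instead of parsing printed columns (the effect is identical: claimRef is cleared on every Released PV):

kubectl get pv -o jsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name}{"\n"}{end}' \
  | xargs -I{} kubectl patch pv {} --type=merge -p '{"spec":{"claimRef":null}}'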

Solution 6 - Kubernetes

The patches from the other answers worked for me only after deleting the Deployment.
After that, the resources stuck in Terminating became Released.


Delete all the resources listed by:

kubectl -n YOURNAMESPACE get all

For each resource listed there, run kubectl -n YOURNAMESPACE delete <resource> <id> or (if you copy-paste from the above output) kubectl -n YOURNAMESPACE delete <resource>/<id>.

You can also delete them all at once: kubectl -n YOURNAMESPACE delete <resource>/<id1> <resource>/<id2> <resource2>/<id3> <resource2>/<id4> <resource3>/<id5> etc.

You probably tried to remove resources before, but they kept being recreated by the Deployment or ReplicaSet resource, preventing the namespace from freeing up its dependent resources and from being cleaned up.
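
A minimal sketch of that clean-up, assuming the staging namespace from the question (the my-app names are purely illustrative):

kubectl -n staging get all
kubectl -n staging delete deployment/my-app service/my-app
# or, more bluntly, delete everything that `get all` lists in the namespace:
kubectl -n staging delete all --all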

Solution 7 - Kubernetes

The answer from Bharat worked for me as well.

If your PV shows up as "Released" and you have already deleted the PVC via helm uninstall or another method, then you cannot re-use this PV again unless you remove the claim ref:

kubectl patch pv PV_NAME -p '{"spec":{"claimRef": null}}'

Keep in mind that you cannot do this while the PV is still Bound: you must first delete the PVC so that the PV shows "Released", and only then run this command. The PV's status should then appear as "Available" and it can be reused.
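
A quick check (substitute your PV's name for PV_NAME) to confirm the phase has flipped from Released to Available before re-using it:

kubectl get pv PV_NAME -o jsonpath='{.status.phase}{"\n"}'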

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: gmile (View Question on Stackoverflow)
Solution 1 - Kubernetes: Bharat Chhabra (View Answer on Stackoverflow)
Solution 2 - Kubernetes: Sunny J. (View Answer on Stackoverflow)
Solution 3 - Kubernetes: Deepika Pandhi (View Answer on Stackoverflow)
Solution 4 - Kubernetes: Dee (View Answer on Stackoverflow)
Solution 5 - Kubernetes: Zenul_Abidin (View Answer on Stackoverflow)
Solution 6 - Kubernetes: Kamafeather (View Answer on Stackoverflow)
Solution 7 - Kubernetes: doublespaces (View Answer on Stackoverflow)