Namespace "stuck" as Terminating: how I removed it

Kubernetes, Namespaces

Kubernetes Problem Overview


I have a "stuck" namespace that I deleted, but it keeps showing this eternal "Terminating" status.

Kubernetes Solutions


Solution 1 - Kubernetes

Assuming you've already tried to force-delete resources like: https://stackoverflow.com/q/35453792, and you're at your wits' end trying to recover the namespace...

You can force-delete the namespace (perhaps leaving dangling resources):

(
NAMESPACE=your-rogue-namespace
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
)
  • This is a refinement of the answer here, which is based on the comment here.

  • I'm using the jq utility to programmatically delete elements in the finalizers section. You could do that manually instead.

  • kubectl proxy creates the listener at 127.0.0.1:8001 by default. If you know the hostname/IP of your cluster master, you may be able to use that instead.

  • The funny thing is that this approach seems to work even though making the same change with kubectl edit has no effect.

Solution 2 - Kubernetes

This is caused by resources still existing in the namespace that the namespace controller is unable to remove.

This command (with kubectl 1.11+) will show you what resources remain in the namespace:

kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>

Once you find and remove those remaining resources, the namespace will be cleaned up.
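
The xargs fan-out above runs one `kubectl get` per resource type. Its behavior can be sanity-checked without a cluster by substituting `echo` for `kubectl` (the sample resource names below are illustrative):

```shell
# Each input token becomes one invocation; echo stands in for kubectl,
# so this prints the commands that would run against namespace my-ns.
printf 'pods\nsecrets\nconfigmaps\n' \
  | xargs -n 1 echo kubectl get --show-kind --ignore-not-found -n my-ns
# prints one "kubectl get ... -n my-ns <resource>" line per resource type
```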

Solution 3 - Kubernetes

As mentioned earlier in this thread, there is another way to terminate a namespace, using an API not exposed by kubectl, via a modern version of kubectl where kubectl replace --raw is available (I'm not sure from which version). This way you don't have to spawn a kubectl proxy process, and you avoid a dependency on curl (which is not available in some environments, like busybox). In the hope that this helps someone else, I'll leave this here:

kubectl get namespace "stucked-namespace" -o json \
  | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
  | kubectl replace --raw /api/v1/namespaces/stucked-namespace/finalize -f -
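
What the tr + sed pair does can be seen on a sample JSON fragment, no cluster needed (the fragment below is a stand-in for real kubectl output):

```shell
# tr collapses the pretty-printed JSON to one line so that sed can match
# the whole finalizers array, which it then replaces with an empty one.
# Requires GNU sed (for the \+ quantifier).
json='{
  "spec": {
    "finalizers": [
      "kubernetes"
    ]
  }
}'
printf '%s' "$json" | tr -d "\n" \
  | sed 's/"finalizers": \[[^]]\+\]/"finalizers": []/'
```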

Solution 4 - Kubernetes

You need to remove the kubernetes finalizer.

Step 1:

kubectl get namespace <YOUR_NAMESPACE> -o json > <YOUR_NAMESPACE>.json
  • remove kubernetes from the finalizers array, which is under spec

Step 2:

kubectl replace --raw "/api/v1/namespaces/<YOUR_NAMESPACE>/finalize" -f ./<YOUR_NAMESPACE>.json

Step 3:

kubectl get namespace

You can see that the annoying namespace is gone.

Solution 5 - Kubernetes

Simple trick

You can edit the namespace directly on the console: kubectl edit namespace <namespace name>, remove/delete "kubernetes" from inside the finalizers section, and save/apply the changes.

Alternatively, you can do it as follows.

Trick : 1

  1. kubectl get namespace annoying-namespace-to-delete -o json > tmp.json

  2. Then edit tmp.json and remove "kubernetes" from the finalizers array

  3. Open another terminal and Run kubectl proxy

curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://localhost:8001/api/v1/namespaces/<NAMESPACE NAME TO DELETE>/finalize

and it should delete your namespace.

Trick : 2

Check the kubectl cluster-info

1. kubectl cluster-info

> Kubernetes master is running at https://localhost:6443
>
> KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use

2. kubectl cluster-info dump

Now start the proxy using the command:

3. kubectl proxy

> kubectl proxy &
> Starting to serve on 127.0.0.1:8001

Find the namespace:

4. `kubectl get ns`

> {Your namespace name} Terminating 1d

Put it in a file:

5. kubectl get namespace {Your namespace name} -o json > tmp.json

edit the file tmp.json and remove the finalizers

> }, "spec": { "finalizers": [ "kubernetes" ] },

after editing it should look like this

> }, "spec": { "finalizers": [ ] },

We're almost there; now simply run the command:

curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/{Your namespace name}/finalize

and it's gone
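
If you'd rather not hand-edit tmp.json, python3 (commonly available where jq is not) can empty the finalizers; a sketch on sample data, with the same tmp.json file name as in the steps above:

```shell
# Create a sample tmp.json standing in for the real kubectl output,
# then blank out spec.finalizers programmatically.
cat > tmp.json <<'EOF'
{"apiVersion": "v1", "kind": "Namespace",
 "spec": {"finalizers": ["kubernetes"]},
 "status": {"phase": "Terminating"}}
EOF
python3 - <<'EOF'
import json

with open("tmp.json") as f:
    data = json.load(f)
data["spec"]["finalizers"] = []  # same effect as the manual edit
with open("tmp.json", "w") as f:
    json.dump(data, f, indent=2)
EOF
cat tmp.json
```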


Solution 6 - Kubernetes

I loved this answer, extracted from here. It is just 2 commands.

In one terminal:

kubectl proxy

In another terminal:

kubectl get ns delete-me -o json | \
  jq '.spec.finalizers=[]' | \
  curl -X PUT http://localhost:8001/api/v1/namespaces/delete-me/finalize -H "Content-Type: application/json" --data @-

Solution 7 - Kubernetes

Solution:

Use the command below without any changes. It works like a charm.

NS=$(kubectl get ns | grep Terminating | awk 'NR==1 {print $1}') \
  && kubectl get namespace "$NS" -o json \
  | tr -d "\n" \
  | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
  | kubectl replace --raw /api/v1/namespaces/$NS/finalize -f -

Enjoy
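
The namespace selection at the front of that one-liner (grep + awk) can be checked on sample `kubectl get ns` output before pointing it at a real cluster (the namespace names below are made up):

```shell
# grep keeps only the Terminating rows; awk 'NR==1 {print $1}' takes the
# first such row and prints its NAME column.
sample='NAME              STATUS        AGE
default           Active        90d
broken-ns         Terminating   2d
other-broken-ns   Terminating   1d'
NS=$(printf '%s\n' "$sample" | grep Terminating | awk 'NR==1 {print $1}')
echo "$NS"
# prints: broken-ns
```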

Solution 8 - Kubernetes

For us it was the metrics-server crashing.

So, to check whether this is relevant to your case, run: kubectl api-resources

If you get

error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

Then it's probably the same issue.

Credit goes to @javierprovecho here

Solution 9 - Kubernetes

Forcefully deleting the namespace or removing finalizers is definitely not the way to go, since it could leave resources registered to a non-existent namespace.

This is often fine but then one day you won't be able to create a resource because it is still dangling somewhere.

The upcoming Kubernetes version 1.16 should give more insight into namespace finalizers; for now, I would rely on identification strategies. A cool script which tries to automate these is: https://github.com/thyarles/knsk

However, it works across all namespaces, which could be dangerous. The solution it is based on is: https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-524772920

tl;dr

  1. Checking if any apiservice is unavailable and hence doesn't serve its resources: kubectl get apiservice|grep False
  2. Finding all resources that still exist via kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n $your-ns-to-delete

(credit: https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-524772920)

Solution 10 - Kubernetes

I've written a one-liner Python3 script based on the common answers here. This script removes the finalizers in the problematic namespace.

python3 -c "namespace='<my-namespace>';import atexit,subprocess,json,requests,sys;proxy_process = subprocess.Popen(['kubectl', 'proxy']);atexit.register(proxy_process.kill);p = subprocess.Popen(['kubectl', 'get', 'namespace', namespace, '-o', 'json'], stdout=subprocess.PIPE);p.wait();data = json.load(p.stdout);data['spec']['finalizers'] = [];requests.put('http://127.0.0.1:8001/api/v1/namespaces/{}/finalize'.format(namespace), json=data).raise_for_status()"

> Replace namespace='<my-namespace>' with your namespace, e.g. namespace='trust'.



Full script: https://gist.github.com/jossef/a563f8651ec52ad03a243dec539b333d

Solution 11 - Kubernetes

I wrote a simple script to delete your stuck namespace, based on @Shreyangi Saxena's solution.

cat > delete_stuck_ns.sh << "EOF"
#!/usr/bin/env bash

function delete_namespace () {
    echo "Deleting namespace $1"
    kubectl get namespace $1 -o json > tmp.json
    sed -i 's/"kubernetes"//g' tmp.json
    kubectl replace --raw "/api/v1/namespaces/$1/finalize" -f ./tmp.json
    rm ./tmp.json
}

TERMINATING_NS=$(kubectl get ns | awk '$2=="Terminating" {print $1}')

for ns in $TERMINATING_NS
do
    delete_namespace $ns
done
EOF

chmod +x delete_stuck_ns.sh

This script detects all namespaces in the Terminating state and deletes them.


PS:

  • This may not work on macOS, because the native sed on macOS is not compatible with GNU sed.

    You may need to install GNU sed on macOS; refer to this answer.

  • Please confirm that you can access your kubernetes cluster through command kubectl.

  • This has been tested on Kubernetes v1.15.3.
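
The sed substitution in the script simply deletes the quoted string, leaving a valid empty array; on sample data (no cluster needed):

```shell
# Removing the literal "kubernetes" from the finalizers array.
printf '%s\n' '{"spec": {"finalizers": ["kubernetes"]}}' \
  | sed 's/"kubernetes"//g'
# prints: {"spec": {"finalizers": []}}
```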


Update

I found an easier solution:

kubectl patch RESOURCE NAME -p '{"metadata":{"finalizers":[]}}' --type=merge

Solution 12 - Kubernetes

Run kubectl get apiservice

For the above command you will find an apiservice with the Available flag = False.

So, just delete that apiservice using kubectl delete apiservice <apiservice name>

After doing this, the namespace with terminating status will disappear.

Solution 13 - Kubernetes

Please try the command below:

kubectl patch ns <your_namespace> -p '{"metadata":{"finalizers":null}}'

Solution 14 - Kubernetes

In my case the problem was caused by custom metrics.

To find out what is causing the pain, just run this command:

kubectl api-resources | grep -i false

That should tell you which API resource causes the problem; once identified, just delete it:

kubectl delete apiservice v1beta1.custom.metrics.k8s.io

Once deleted, the namespace should disappear.

Solution 15 - Kubernetes

  1. Run the following command to view the namespaces that are stuck in the Terminating state:

    kubectl get namespaces

  2. Select a terminating namespace and view its contents to find out the finalizer. Run the following command:

    kubectl get namespace <terminating-namespace> -o yaml

  3. Your YAML contents might resemble the following output:

        apiVersion: v1
        kind: Namespace
        metadata:
          creationTimestamp: 2019-12-25T17:38:32Z
          deletionTimestamp: 2019-12-25T17:51:34Z
          name: <terminating-namespace>
          resourceVersion: "4779875"
          selfLink: /api/v1/namespaces/<terminating-namespace>
          uid: ******-****-****-****-fa1dfgerz5
        spec:
          finalizers:
          - kubernetes
        status:
          phase: Terminating

  4. Run the following command to create a temporary JSON file:

    kubectl get namespace <terminating-namespace> -o json > tmp.json

  5. Edit your tmp.json file: remove the kubernetes value from the finalizers field and save the file. The output would be like:

    {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "creationTimestamp": "2018-11-19T18:48:30Z",
            "deletionTimestamp": "2018-11-19T18:59:36Z",
            "name": "<terminating-namespace>",
            "resourceVersion": "1385077",
            "selfLink": "/api/v1/namespaces/<terminating-namespace>",
            "uid": "b50c9ea4-ec2b-11e8-a0be-fa163eeb47a5"
        },
        "spec": {
        },
        "status": {
            "phase": "Terminating"
        }
    }

  6. To set a temporary proxy IP and port, run the following command. Be sure to keep your terminal window open until you delete the stuck namespace:

    kubectl proxy

  7. Your proxy IP and port might resemble the following output:

    Starting to serve on 127.0.0.1:8001

  8. From a new terminal window, make an API call with your temporary proxy IP and port:

    curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/<terminating-namespace>/finalize

Your output would be like:

    {
       "kind": "Namespace",
       "apiVersion": "v1",
       "metadata": {
         "name": "<terminating-namespace>",
         "selfLink": "/api/v1/namespaces/<terminating-namespace>/finalize",
         "uid": "b50c9ea4-ec2b-11e8-a0be-fa163eeb47a5",
         "resourceVersion": "1602981",
         "creationTimestamp": "2018-11-19T18:48:30Z",
         "deletionTimestamp": "2018-11-19T18:59:36Z"
       },
       "spec": {

       },
       "status": {
         "phase": "Terminating"
       }
     }

  9. The finalizer parameter is removed. Now, to verify that the terminating namespace is gone, run the following command:

    kubectl get namespaces
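
Instead of hand-editing tmp.json, the finalizers entry can also be emptied with GNU sed; a sketch on sample data (the sample stands in for real kubectl output):

```shell
# -z makes GNU sed treat the whole file as one record, so the pattern
# can match across the lines of the multi-line finalizers array.
cat > tmp.json <<'EOF'
{
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    }
}
EOF
sed -i -z 's/"finalizers": \[[^]]*\]/"finalizers": []/' tmp.json
cat tmp.json
```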
    

Solution 16 - Kubernetes

Replace ambassador with your namespace

Check if the namespace is stuck

kubectl get ns ambassador

NAME         STATUS        AGE
ambassador   Terminating   110d

This has been stuck for a long time.

Open an admin terminal/cmd prompt or PowerShell and run

> kubectl proxy

This will start a local web server: Starting to serve on 127.0.0.1:8001

Open another terminal and run

kubectl get ns ambassador -o json >tmp.json

edit the tmp.json using vi or nano

from this

{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
    "annotations": {
        "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"ambassador\"}}\n"
    },
    "creationTimestamp": "2021-01-07T18:23:28Z",
    "deletionTimestamp": "2021-04-28T06:43:41Z",
    "name": "ambassador",
    "resourceVersion": "14572382",
    "selfLink": "/api/v1/namespaces/ambassador",
    "uid": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"spec": {
    "finalizers": [
        "kubernetes"
    ]
},
"status": {
    "conditions": [
        {
            "lastTransitionTime": "2021-04-28T06:43:46Z",
            "message": "Discovery failed for some groups, 3 failing: unable to retrieve the complete list of server APIs: compose.docker.com/v1alpha3: an error on the server (\"Internal Server Error: \\\"/apis/compose.docker.com/v1alpha3?timeout=32s\\\": Post https://0.0.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: write tcp 0.0.0.0:53284-\u0026gt;0.0.0.0:443: write: broken pipe\") has prevented the request from succeeding, compose.docker.com/v1beta1: an error on the server (\"Internal Server Error: \\\"/apis/compose.docker.com/v1beta1?timeout=32s\\\": Post https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: write tcp 0.0.0.0:5284-\u0026gt;10.96.0.1:443: write: broken pipe\") has prevented the request from succeeding, compose.docker.com/v1beta2: an error on the server (\"Internal Server Error: \\\"/apis/compose.docker.com/v1beta2?timeout=32s\\\": Post https://0.0.0.0:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: write tcp 1.1.1.1:2284-\u0026gt;0.0.0.0:443: write: broken pipe\") has prevented the request from succeeding",
            "reason": "DiscoveryFailed",
            "status": "True",
            "type": "NamespaceDeletionDiscoveryFailure"
        },
        {
            "lastTransitionTime": "2021-04-28T06:43:49Z",
            "message": "All legacy kube types successfully parsed",
            "reason": "ParsedGroupVersions",
            "status": "False",
            "type": "NamespaceDeletionGroupVersionParsingFailure"
        },
        {
            "lastTransitionTime": "2021-04-28T06:43:49Z",
            "message": "All content successfully deleted",
            "reason": "ContentDeleted",
            "status": "False",
            "type": "NamespaceDeletionContentFailure"
        }
    ],
    "phase": "Terminating"
}

}

to

    {
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"ambassador\"}}\n"
    },
    "creationTimestamp": "2021-01-07T18:23:28Z",
    "deletionTimestamp": "2021-04-28T06:43:41Z",
    "name": "ambassador",
    "resourceVersion": "14572382",
    "selfLink": "/api/v1/namespaces/ambassador",
    "uid": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  },
  "spec": {
    "finalizers": []
  }
}

by deleting the status section and removing kubernetes from inside finalizers

Now use the command and replace ambassador with your namespace

curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/ambassador/finalize

You will see another JSON response like before. Then run:

 kubectl get ns ambassador
Error from server (NotFound): namespaces "ambassador" not found

If it still says Terminating, or you hit any other error, make sure your JSON is formatted properly and try the steps again.

Solution 17 - Kubernetes

If the namespace stuck in Terminating while the resources in that namespace have been already deleted, you can patch the finalizers of the namespace before deleting it:

kubectl patch ns ns_to_be_deleted -p '{"metadata":{"finalizers":null}}';

then

kubectl delete ns ns_to_be_deleted;

Edit:

Please check @Antonio Gomez Alvarado's Answer first. The root cause could be the metrics server that mentioned in that answer.

Solution 18 - Kubernetes

The only way I found to remove a "Terminating" namespace is by deleting the entry inside the "finalizers" section. I tried --force deleting it and using --grace-period=0; neither of them worked, however, this method did:

on a command line display the info from the namespace:

$ kubectl get namespace your-rogue-namespace -o yaml

This will give you yaml output, look for a line that looks similar to this:

deletionTimestamp: 2018-09-17T13:00:10Z
  finalizers:
  - Whatever content it might be here...
  labels:

Then simply edit the namespace configuration and delete the items inside that finalizers container.

$ kubectl edit namespace your-rogue-namespace

This will open an editor (in my case vi). I went to the line I wanted to delete and pressed the d key twice (dd) to delete the whole line.

Save it, quit your editor, and, like magic, the rogue namespace should be gone.

And to confirm it just:

$ kubectl get namespace your-rogue-namespace -o yaml

Solution 19 - Kubernetes

Edit: Removing finalizers is not generally recommended; identifying and deleting the resources that block deletion is the correct approach. My usual workspace is a small k8s cluster which I frequently destroy and rebuild, and that's why the finalizer-removal method works for me.

Original answer: I usually run into the same problem.

This is what I do

kubectl get ns your-namespace -o json > ns-without-finalizers.json

Edit ns-without-finalizers.json and replace all finalizers with an empty array.

Run kubectl proxy (usually in another terminal)

Then curl this command

curl -X PUT http://localhost:8001/api/v1/namespaces/your-namespace/finalize -H "Content-Type: application/json" --data @ns-without-finalizers.json

Solution 20 - Kubernetes

There are a couple of things you can run. But what this usually means is that the automatic deletion of the namespace was not able to finish, and there is a resource remaining that has to be deleted manually. To find it, you can try these things:

Get all resources attached to the namespace. If this does not turn up anything, move on to the next suggestion:

$ kubectl get all -n your-namespace

Some namespaces have apiservices attached to them, and those can be troublesome to delete. For that matter, this could be whatever resource you want. Delete that resource if the following finds anything:

$ kubectl get apiservice|grep False

But the main takeaway is that some things might not be completely removed. So check what you initially had in that namespace, and what your YAMLs spin up, to spot the leftovers. Or you can start to google why service X won't be properly removed, and you will find things.
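
The apiservice check can be previewed on sample `kubectl get apiservice` output (the names below are illustrative):

```shell
# grep False keeps only the rows whose AVAILABLE column reports a failure.
sample='NAME                      SERVICE                AVAILABLE                  AGE
v1.apps                   Local                  True                       90d
v1beta1.metrics.k8s.io    kube-system/metrics    False (MissingEndpoints)   90d'
printf '%s\n' "$sample" | grep False
# prints only the failing apiservice row
```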

Solution 21 - Kubernetes

Completing the already great answer by nobar: if you deployed your cluster with Rancher, there is a caveat.

Rancher deployments change EVERY api call, prepending /k8s/clusters/c-XXXXX/ to the URLs.

The id of the cluster on rancher (c-XXXXX) is something you can easily get from the Rancher UI, as it will be there on the URL.

Get cluster id

So after you get that cluster id c-xxxxx, just do as nobar says, changing the API call to include that Rancher prefix.

(
NAMESPACE=your-rogue-namespace
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" \
  -X PUT --data-binary @temp.json \
  127.0.0.1:8001/k8s/clusters/c-XXXXX/api/v1/namespaces/$NAMESPACE/finalize
)

Solution 22 - Kubernetes

Debugging a similar issue.

Two important things to consider:

1) Think twice before deleting finalizers from your namespace, because there might be resources you wouldn't want automatically deleted, or at least you'll want to understand what was deleted for troubleshooting.

2) Commands like kubectl api-resources --verbs=list might not show resources that were created by external CRDs.


In my case:

I viewed my namespace's real state (it was stuck on Terminating) with kubectl edit ns <ns-name>, and under status -> conditions I saw that some external CRDs I had installed failed to be deleted because they have finalizers defined:

  - lastTransitionTime: "2021-06-14T11:14:47Z"
    message: 'Some content in the namespace has finalizers remaining: finalizer.stackinstall.crossplane.io
      in 1 resource instances, finalizer.stacks.crossplane.io in 1 resource instances'
    reason: SomeFinalizersRemain
    status: "True"
    type: NamespaceFinalizersRemaining

Solution 23 - Kubernetes

Something similar happened to me; in my case it was a PV and PVC, which I forcefully removed by setting the finalizers to null. Check if you can do the same with your namespace:

kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'

For namespaces it'd be

kubectl patch ns <ns-name> -p '{"spec":{"finalizers":null}}'

Solution 24 - Kubernetes

I tried 3-5 options to remove the namespace, but only this one worked for me.

This sh file will remove all namespaces with Terminating status

$ vi force-delete-namespaces.sh

$ chmod +x force-delete-namespaces.sh

$ ./force-delete-namespaces.sh

#!/usr/bin/env bash

set -e
set -o pipefail

kubectl proxy &
proxy_pid="$!"
trap 'kill "$proxy_pid"' EXIT

for ns in $(kubectl get namespace --field-selector=status.phase=Terminating --output=jsonpath="{.items[*].metadata.name}"); do
    echo "Removing finalizers from namespace '$ns'..."
    curl -H "Content-Type: application/json" -X PUT "127.0.0.1:8001/api/v1/namespaces/$ns/finalize" -d @- \
        < <(kubectl get namespace "$ns" --output=json | jq '.spec = { "finalizers": [] }')

    echo
    echo "Force-deleting namespace '$ns'..."
    kubectl delete namespace "$ns" --force --grace-period=0 --ignore-not-found=true
done
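
The proxy-lifecycle pattern the script relies on (background job, pid capture, trap on EXIT) can be exercised in isolation with a harmless placeholder process:

```shell
# sleep stands in for `kubectl proxy`; the trap guarantees the
# background process is killed when the script exits, even on error.
sleep 60 &
proxy_pid="$!"
trap 'kill "$proxy_pid" 2>/dev/null' EXIT
echo "background pid: $proxy_pid"
```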

Solution 25 - Kubernetes

curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json 127.0.0.1:8001/k8s/clusters/c-mzplp/api/v1/namespaces/rook-ceph/finalize

This worked for me, the namespace is gone.

Detailed explanation can be found in the link https://github.com/rook/rook/blob/master/Documentation/ceph-teardown.md.

This happened when I interrupted a Kubernetes installation (Armory Minnaker). I then deleted the namespace and reinstalled, and was stuck with pods in Terminating status due to finalizers. I dumped the namespace into tmp.json, removed the finalizers from the tmp.json file, and ran the curl command. Once past this issue, I used the cluster-uninstall scripts to remove the residue and did a reinstallation.

Solution 26 - Kubernetes

kubectl edit namespace ${stucked_namespace}

Then delete finalizers in vi mode and save.

It worked in my case.

Solution 27 - Kubernetes

The simplest and easiest way of doing this is to copy this bash script:

#!/bin/bash

###############################################################################
# Copyright (c) 2018 Red Hat Inc
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Eclipse Public License 2.0 which is available at
# http://www.eclipse.org/legal/epl-2.0
#
# SPDX-License-Identifier: EPL-2.0
###############################################################################

set -eo pipefail

die() { echo "$*" 1>&2 ; exit 1; }

need() {
	which "$1" &>/dev/null || die "Binary '$1' is missing but required"
}

# checking pre-reqs

need "jq"
need "curl"
need "kubectl"

PROJECT="$1"
shift

test -n "$PROJECT" || die "Missing arguments: kill-ns <namespace>"

kubectl proxy &>/dev/null &
PROXY_PID=$!
killproxy () {
	kill $PROXY_PID
}
trap killproxy EXIT

sleep 1 # give the proxy a second

kubectl get namespace "$PROJECT" -o json | jq 'del(.spec.finalizers[] | select("kubernetes"))' | curl -s -k -H "Content-Type: application/json" -X PUT -o /dev/null --data-binary @- http://localhost:8001/api/v1/namespaces/$PROJECT/finalize && echo "Killed namespace: $PROJECT"

# proxy will get killed by the trap

Add the above code to a deletenamespace.sh file.

Then execute it, providing the namespace as a parameter (linkerd is the namespace I wanted to delete here):

➜ kubectl get namespaces
linkerd           Terminating   11d

➜ sh deletenamespace.sh linkerd
Killed namespace: linkerd

➜ kubectl get namespaces

The above tip has worked for me.

Honestly, I think kubectl delete namespace mynamespace --grace-period=0 --force is not worth trying at all.

Special Thanks to Jens Reimann! I think this script should be incorporated in kubectl commands.

Solution 28 - Kubernetes

Delete all the resources listed by:

kubectl -n YOURNAMESPACE get all

Use kubectl -n YOURNAMESPACE delete <resource> <id> or (if you copy-paste from the above output) kubectl -n YOURNAMESPACE delete <resource>/<id>, for each resource that you see listed there.

You can also do it all at once: kubectl -n YOURNAMESPACE delete <resource>/<id1> <resource>/<id2> <resource2>/<id3> <resource2>/<id4> <resource3>/<id5> etc.

Probably you tried to remove resources, but they are getting recreated because of a Deployment or ReplicaSet resource, preventing the namespace from freeing up its dependent resources and being cleaned up.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | ximbal | View Question on Stackoverflow
Solution 1 - Kubernetes | Brent Bradburn | View Answer on Stackoverflow
Solution 2 - Kubernetes | Jordan Liggitt | View Answer on Stackoverflow
Solution 3 - Kubernetes | teoincontatto | View Answer on Stackoverflow
Solution 4 - Kubernetes | Shreyangi Saxena | View Answer on Stackoverflow
Solution 5 - Kubernetes | Harsh Manvar | View Answer on Stackoverflow
Solution 6 - Kubernetes | dbustosp | View Answer on Stackoverflow
Solution 7 - Kubernetes | Mohammad Ravanbakhsh | View Answer on Stackoverflow
Solution 8 - Kubernetes | Antonio Gomez Alvarado | View Answer on Stackoverflow
Solution 9 - Kubernetes | Luke | View Answer on Stackoverflow
Solution 10 - Kubernetes | Jossef Harush Kadouri | View Answer on Stackoverflow
Solution 11 - Kubernetes | alex li | View Answer on Stackoverflow
Solution 12 - Kubernetes | Saurav Malani | View Answer on Stackoverflow
Solution 13 - Kubernetes | Ela_murugan | View Answer on Stackoverflow
Solution 14 - Kubernetes | Christian Altamirano Ayala | View Answer on Stackoverflow
Solution 15 - Kubernetes | prince | View Answer on Stackoverflow
Solution 16 - Kubernetes | Naveen Gopalakrishna | View Answer on Stackoverflow
Solution 17 - Kubernetes | imriss | View Answer on Stackoverflow
Solution 18 - Kubernetes | ximbal | View Answer on Stackoverflow
Solution 19 - Kubernetes | Kiran | View Answer on Stackoverflow
Solution 20 - Kubernetes | vonGohren | View Answer on Stackoverflow
Solution 21 - Kubernetes | saulR | View Answer on Stackoverflow
Solution 22 - Kubernetes | RtmY | View Answer on Stackoverflow
Solution 23 - Kubernetes | Abhi Gadroo | View Answer on Stackoverflow
Solution 24 - Kubernetes | аlex dykyі | View Answer on Stackoverflow
Solution 25 - Kubernetes | wind_surfer | View Answer on Stackoverflow
Solution 26 - Kubernetes | AnonymousX | View Answer on Stackoverflow
Solution 27 - Kubernetes | rhozet | View Answer on Stackoverflow
Solution 28 - Kubernetes | Kamafeather | View Answer on Stackoverflow