Kubernetes pod gets recreated when deleted

I started the pods with this command:

$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1

Something went wrong, and now I can't delete this Pod.

I tried the methods described below, but the Pod keeps being recreated.

$ kubectl delete pods  busybox-na3tm
pod "busybox-na3tm" deleted
$ kubectl get pods
NAME                                     READY     STATUS              RESTARTS   AGE
busybox-vlzh3                            0/1       ContainerCreating   0          14s

$ kubectl delete pod busybox-vlzh3 --grace-period=0


$ kubectl delete pods --all
pod "busybox-131cq" deleted
pod "busybox-136x9" deleted
pod "busybox-13f8a" deleted
pod "busybox-13svg" deleted
pod "busybox-1465m" deleted
pod "busybox-14uz1" deleted
pod "busybox-15raj" deleted
pod "busybox-160to" deleted
pod "busybox-16191" deleted


$ kubectl get pods --all-namespaces
NAMESPACE   NAME            READY     STATUS              RESTARTS   AGE
default     busybox-c9rnx   0/1       RunContainerError   0          23s

You need to delete the deployment, which in turn deletes the pods and the replica sets (see https://github.com/kubernetes/kubernetes/issues/24137).

To list all deployments:

kubectl get deployments --all-namespaces

Then to delete the deployment:

kubectl delete -n NAMESPACE deployment DEPLOYMENT

Where NAMESPACE is the namespace it's in, and DEPLOYMENT is the name of the deployment. If NAMESPACE is default, leave off the -n option altogether.
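
For example, if the deployment created by kubectl run happened to be named busybox in the default namespace (a hypothetical name here; take the real one from the get deployments output above), the command would be:

kubectl delete deployment busybox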

In some cases the pod could also be recreated by a job or a DaemonSet. Check the following and run the appropriate delete command.

kubectl get jobs

kubectl get daemonsets.apps --all-namespaces

kubectl get daemonsets.extensions --all-namespaces
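
If one of those lists the owner, delete it the same way, for example (JOB_NAME, DAEMONSET_NAME and NAMESPACE are placeholders, as above):

kubectl delete -n NAMESPACE job JOB_NAME

kubectl delete -n NAMESPACE daemonset DAEMONSET_NAME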

Instead of trying to figure out whether it is a deployment, daemonset, statefulset, or something else (in my case it was a replication controller that kept spawning new pods :), I determined what kept spawning the image by getting all the resources with this command:

kubectl get all

Of course you could also get all resources from all namespaces:

kubectl get all --all-namespaces

or specify the namespace you would like to inspect:

kubectl get all -n NAMESPACE_NAME

Once I saw that the replication controller was responsible for my trouble, I deleted it:

kubectl delete replicationcontroller/CONTROLLER_NAME
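
For example, if kubectl get all had listed replicationcontroller/busybox (a hypothetical name matching the question), that would be:

kubectl delete replicationcontroller/busybox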

If your pod has a name like name-xxx-yyy, it could be controlled by a ReplicaSet (replicasets.apps) named name-xxx; delete that ReplicaSet first, before deleting the pod:

kubectl delete replicasets.apps name-xxx
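
The name-xxx prefix is only a naming convention; to confirm which object actually owns a pod, you can read its ownerReferences directly (a generic check that works for any controller type; name-xxx-yyy stands in for your pod's name):

kubectl get pod name-xxx-yyy -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'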

Look out for StatefulSets as well:

kubectl get sts --all-namespaces

To delete all the StatefulSets in a namespace:

kubectl --namespace <yournamespace> delete sts --all

To delete them one by one:

kubectl --namespace ag1 delete sts mssql1 
kubectl --namespace ag1 delete sts mssql2
kubectl --namespace ag1 delete sts mssql3

Many answers here tell you to delete a specific k8s object, but you can delete multiple objects at once instead of one by one:

kubectl delete deployments,jobs,services,pods --all -n <namespace>
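
If you want an even wider sweep, kubectl also accepts the all category (the same one kubectl get all uses); note, however, that all does not cover every resource type, e.g. custom resources such as the OLM objects in the answer below are excluded:

kubectl delete all --all -n <namespace>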

In my case, I'm running an OpenShift cluster with OLM (Operator Lifecycle Manager). OLM is what controls the deployment, so when I deleted the deployment, that was not sufficient to stop the pods from restarting.

Only after I deleted the OLM and its subscription were the deployment, services and pods gone.

First list all k8s objects in your namespace:

$ kubectl get all -n openshift-submariner

NAME                                       READY   STATUS    RESTARTS   AGE
pod/submariner-operator-847f545595-jwv27   1/1     Running   0          8d  
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/submariner-operator-metrics   ClusterIP   101.34.190.249   <none>        8383/TCP   8d
NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/submariner-operator   1/1     1            1           8d
NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/submariner-operator-847f545595   1         1         1       8d

OLM is not listed by get all, so I searched for it specifically:

$ kubectl get olm -n openshift-submariner

NAME                                                      AGE
operatorgroup.operators.coreos.com/openshift-submariner   8d
NAME                                                             DISPLAY      VERSION
clusterserviceversion.operators.coreos.com/submariner-operator   Submariner   0.0.1 

Now delete all objects, including OLMs, subscriptions, deployments, replica sets, etc.:

$ kubectl delete olm,svc,rs,rc,subs,deploy,jobs,pods --all -n openshift-submariner

operatorgroup.operators.coreos.com "openshift-submariner" deleted
clusterserviceversion.operators.coreos.com "submariner-operator" deleted
deployment.extensions "submariner-operator" deleted
subscription.operators.coreos.com "submariner" deleted
service "submariner-operator-metrics" deleted
replicaset.extensions "submariner-operator-847f545595" deleted
pod "submariner-operator-847f545595-jwv27" deleted

List objects again - all gone:

$ kubectl get all -n openshift-submariner
No resources found.

$ kubectl get olm -n openshift-submariner
No resources found.