Back-off restarting failed container - Error syncing pod in Minikube

I'm facing this error when trying to create pods. It occurs even with very common images like Ubuntu and Alpine. I'm fairly new to Kubernetes and am using a Minikube node (version v0.24.1).

Command:

kubectl run ubuntu --image=ubuntu

Error:

Back-off restarting failed container - Error syncing pod

Versions:

  • Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  • Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}

Output of the kubectl describe pod command:

Name:           ubuntunew-7567df64b8-mwc7x
Namespace:      default
Node:           minikube/192.168.99.102
Start Time:     Tue, 31 Jul 2018 14:48:35 +0530
Labels:         pod-template-hash=3123892064
                run=ubuntunew
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"ubuntunew-7567df64b8","uid":"b3ba5547-94a2-11e8-91ce-080027df8e9...
Status:         Running
IP:             172.17.0.4
Created By:     ReplicaSet/ubuntunew-7567df64b8
Controlled By:  ReplicaSet/ubuntunew-7567df64b8
Containers:
  ubuntunew:
    Container ID:   docker://7871bcbd8a42164fd1168ed1955b75583e16d779fb609d39ebbb2c871e855b3b
    Image:          ubuntu
    Image ID:       docker-pullable://ubuntu@sha256:3f119dc0737f57f704ebecac8a6d8477b0f6ca1ca0332c7ee1395ed2c6a82be7
    Port:           <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 31 Jul 2018 15:02:53 +0530
      Finished:     Tue, 31 Jul 2018 15:02:53 +0530
    Ready:          False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8nj4d (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  default-token-8nj4d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8nj4d
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason                 Age                 From               Message
  ----     ------                 ----                ----               -------
  Normal   Scheduled              15m                 default-scheduler  Successfully assigned ubuntunew-7567df64b8-mwc7x to minikube
  Normal   SuccessfulMountVolume  15m                 kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-8nj4d"
  Warning  BackOff                13m (x4 over 14m)   kubelet, minikube  Back-off restarting failed container
  Normal   Pulled                 13m (x4 over 14m)   kubelet, minikube  Successfully pulled image "ubuntu"
  Normal   Created                13m (x4 over 14m)   kubelet, minikube  Created container
  Normal   Started                13m (x4 over 14m)   kubelet, minikube  Started container
  Normal   Pulling                10m (x6 over 15m)   kubelet, minikube  pulling image "ubuntu"
  Warning  FailedSync             39s (x52 over 14m)  kubelet, minikube  Error syncing pod

Weirdly enough, it works for the nginx image.


As the kubectl describe pod output shows, the container inside your Pod has already terminated with exit code 0, which indicates it completed successfully, but its lifetime was very short. The ubuntu image's default command is /bin/bash, which exits immediately when there is no terminal attached, so the container finishes as soon as it starts. Since a Pod's restartPolicy defaults to Always, the kubelet keeps restarting the container, producing the CrashLoopBackOff state and the "Back-off restarting failed container" events you see. That is also why the nginx image works: it runs a long-lived server process. To keep the Pod running continuously, you must specify a task that never finishes, for example:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]

You can find more hints here.
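
As a side note, if you just need a long-running throwaway container and don't want to write a manifest, something along these lines should also work; the --command flag tells kubectl run to use the trailing arguments as the container's command rather than its args (on your kubectl 1.8 this creates a Deployment rather than a bare Pod):

kubectl run ubuntu --image=ubuntu --command -- tail -f /dev/null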