Pod keeps restarting and is in a CrashLoopBackOff state
One of our pods won't start: it is constantly restarting and sits in a CrashLoopBackOff state:
NAME                                                        READY     STATUS             RESTARTS   AGE
quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4   0/1       CrashLoopBackOff   72         5h
Describing the pod looks like this (just the events):
FirstSeen LastSeen Count From SubobjectPath Reason Message
57m 57m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 7515ced7f49c
57m 57m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 7515ced7f49c
52m 52m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 2efe8885ad49
52m 52m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 2efe8885ad49
46m 46m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id a4361ebc3c06
46m 46m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id a4361ebc3c06
41m 41m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 99bc3a8b01ad
41m 41m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 99bc3a8b01ad
36m 36m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 3e873c664cde
36m 36m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 3e873c664cde
31m 31m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 97680dac2e12
31m 31m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 97680dac2e12
26m 26m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 42ef4b0eea73
26m 26m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 42ef4b0eea73
21m 21m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 7dbd65668733
21m 21m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 7dbd65668733
15m 15m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id d372cb279fff
15m 15m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id d372cb279fff
10m 10m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id bc7f5a0fe5d4
10m 10m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id bc7f5a0fe5d4
5m 5m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id b545a71af1d2
5m 5m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id b545a71af1d2
3h 25s 43 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Pulled Container image "us.gcr.io/skywatch-app/quasar-api-staging:15.0" already present on machine
25s 25s 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 3e4087281881
25s 25s 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 3e4087281881
3h 5s 1143 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Backoff Back-off restarting failed docker container
The log for the pod doesn't show much either:
Pod "quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4" in namespace "default": container "quasar-api-staging" is in waiting state.
I've been able to run the pod locally, and it seems to work. I'm not sure what else to check or try. Any help or troubleshooting steps would be greatly appreciated!
You might try running kubectl logs <podid> --previous to see the logs from the previous instance of the container.
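With the pod name from your output, that would be something like the following (the -c flag is only needed if the pod has more than one container):

kubectl logs quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4 --previous
kubectl logs quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4 -c quasar-api-staging --previous

If the previous logs turn out to be empty, the exit code of the last terminated container can also narrow things down, e.g.:

kubectl get pod quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4 \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

An exit code of 137 usually means the container was killed (for example by the OOM killer), while other non-zero codes typically mean the application itself exited with an error.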