“CrashLoopBackOff” and how to fix it
In any larger container orchestration installation, be it Kubernetes or OpenShift, you will sooner or later encounter pods that crash repeatedly and enter the “CrashLoopBackOff” status.
$ oc get pod --all-namespaces
NAMESPACE      NAME                   READY   STATUS             RESTARTS   AGE
[..]
my-project-1   helloworld-11-9w3ud    1/1     Running            0          7h
my-project-2   myapp-simon-43-7macd   0/1     CrashLoopBackOff   3774       9h
Note the second pod: its status is “CrashLoopBackOff” and it has already accumulated 3774 restarts.
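On a busy cluster you may not want to scan the whole list by eye; a simple grep over the same output shows only the crashing pods (the row below is just the example pod from above):

$ oc get pod --all-namespaces | grep CrashLoopBackOff
my-project-2   myapp-simon-43-7macd   0/1     CrashLoopBackOff   3774       9h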
What does this mean?
“CrashLoopBackOff” means the pod is starting, crashing, getting restarted, and crashing again. Hence the symptoms above: the pod STATUS is “CrashLoopBackOff” and the number of RESTARTS keeps growing. This can happen, for example, when:
- In most cases, the application errors out as soon as the pod starts (during initialization of the application, for example)
- The container’s main process exits immediately, e.g. because it expects an interactive shell but no TTY is attached, so Kubernetes keeps restarting it (see the example below)
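To make the behaviour tangible, here is a minimal sketch of a pod that deliberately exits with an error right after starting; the pod name and image are placeholders chosen only for this illustration. Applying it reproduces exactly the pattern shown above, with the restart delay growing after each crash:

$ cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: crashloop-demo            # hypothetical name, for illustration only
spec:
  restartPolicy: Always           # the default; the kubelet restarts the container on every exit
  containers:
  - name: demo
    image: busybox                # any small image will do
    # The command fails immediately, so the container crashes on every start
    command: ["sh", "-c", "echo 'failing on purpose'; exit 1"]
EOF

After a few restarts, oc get pod crashloop-demo will show the CrashLoopBackOff status and a climbing restart counter.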
How can we fix this?
There is no simple, universal solution for this, as the cause is specific to your container and whatever application you are running inside it. Basically, you will have to find out why the container crashes. The easiest first check is whether there are any errors in the output of the previous startup, e.g.:
$ oc project my-project-2
$ oc logs --previous myapp-simon-43-7macd
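If the previous logs are empty or inconclusive, describing the pod is a good next step; for example:

$ oc describe pod myapp-simon-43-7macd

In that output, the exit code of the last terminated container and the Events section at the bottom often point to the reason for the repeated crashes.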
Also check whether you specified a valid “ENTRYPOINT” in your Dockerfile. As an alternative, try launching your container directly on Docker instead of on Kubernetes / OpenShift.
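For example, something along these lines lets you inspect the entrypoint and reproduce the crash locally; the image name is a placeholder you would replace with your own:

# pull the image from your registry (hypothetical image name)
$ docker pull myregistry.example.com/my-project-2/myapp-simon:latest
# check which ENTRYPOINT and CMD are actually baked into the image
$ docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' myregistry.example.com/my-project-2/myapp-simon:latest
# run it interactively and watch it crash in the foreground
$ docker run --rm -it myregistry.example.com/my-project-2/myapp-simon:latest

Running the exact same image outside the orchestrator quickly tells you whether the problem is in the image itself or in the Kubernetes / OpenShift configuration around it.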