How can I debug "ImagePullBackOff"?

Kubernetes, OpenShift, OpenShift Origin

Kubernetes Problem Overview


All of a sudden, I cannot deploy some images which could be deployed before. I got the following pod status:

[root@webdev2 origin]# oc get pods 
NAME                      READY     STATUS             RESTARTS   AGE 
arix-3-yjq9w              0/1       ImagePullBackOff   0          10m 
docker-registry-2-vqstm   1/1       Running            0          2d 
router-1-kvjxq            1/1       Running            0          2d 

The application just won't start; the pod is not even trying to run the container. From the Events page, I got Back-off pulling image "172.30.84.25:5000/default/arix@sha256:d326. I have verified that I can pull the image with its tag using docker pull.

I have also checked the log of the last container. It was closed for some reason. I think the pod should at least try to restart it.

I have run out of ideas for debugging this issue. What else can I check?

Kubernetes Solutions


Solution 1 - Kubernetes

You can use the 'describe pod' command.

For OpenShift use:

oc describe pod <pod-id>  

For vanilla Kubernetes:

kubectl describe pod <pod-id>  

Examine the events in the output. In my case, it shows Back-off pulling image "unreachableserver/nginx:1.14.22222".

In this case, the image unreachableserver/nginx:1.14.22222 cannot be pulled from the Internet, because there is no Docker registry unreachableserver and the image nginx:1.14.22222 does not exist.

NB: If you do not see any events of interest and the pod has been in 'ImagePullBackOff' status for a while (seemingly more than 60 minutes), delete the pod and look at the events from the new pod.

For OpenShift use:

oc delete pod <pod-id>
oc get pods
oc get pod <new-pod-id>

For vanilla Kubernetes:

kubectl delete pod <pod-id>  
kubectl get pods
kubectl get pod <new-pod-id>

Sample output:

  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  32s                default-scheduler  Successfully assigned rk/nginx-deployment-6c879b5f64-2xrmt to aks-agentpool-x
  Normal   Pulling    17s (x2 over 30s)  kubelet            Pulling image "unreachableserver/nginx:1.14.22222"
  Warning  Failed     16s (x2 over 29s)  kubelet            Failed to pull image "unreachableserver/nginx:1.14.22222": rpc error: code = Unknown desc = Error response from daemon: pull access denied for unreachableserver/nginx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  Warning  Failed     16s (x2 over 29s)  kubelet            Error: ErrImagePull
  Normal   BackOff    5s (x2 over 28s)   kubelet            Back-off pulling image "unreachableserver/nginx:1.14.22222"
  Warning  Failed     5s (x2 over 28s)   kubelet            Error: ImagePullBackOff

Additional debugging steps

  1. Try to pull the Docker image and tag manually on your computer.
  2. Identify the node by running 'kubectl/oc get pods -o wide'.
  3. SSH into the node (if you can) that cannot pull the Docker image.
  4. Check that the node can resolve the DNS name of the Docker registry by performing a ping.
  5. Try to pull the Docker image manually on the node.
  6. If you are using a private registry, check that your secret exists and is correct. Your secret should also be in the same namespace (see the sketch after this list). Thanks, swenzel.
  7. Some registries have firewalls that limit IP address access. The firewall may block the pull.
  8. Some CIs create deployments with temporary Docker secrets, so the secret expires after a few days (you are asking for production failures...).
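
For step 6, a minimal sketch of checking a pull secret; the secret name regcred and namespace myns are placeholders, not values from the question:

# Does the secret exist in the same namespace as the pod?
kubectl get secret regcred -n myns
# Decode it and compare the registry, username, and auth with what 'docker login' produces
kubectl get secret regcred -n myns -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d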

Solution 2 - Kubernetes

Try to edit the pod to see what's wrong (I had the wrong image location):

kubectl edit pods arix-3-yjq9w

Or even delete your pod:

kubectl delete pod arix-3-yjq9w

Solution 3 - Kubernetes

I faced a similar situation, and it turned out that with an update of Docker Desktop I was signed out. After I signed back in, everything worked fine again.

Solution 4 - Kubernetes

I ran into this issue on Google Kubernetes Engine (GKE), and the reason was no credentials for Docker.

Running this resolved it:

gcloud auth configure-docker
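
If your images live on a regional Artifact Registry host rather than gcr.io, the registry must be named explicitly; the hostname below is only an example:

gcloud auth configure-docker us-central1-docker.pkg.dev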

Solution 5 - Kubernetes

On GKE, if the pod is dead, it's best to check the events. They will show in more detail what the error is about.

In my case, I had:

Failed to pull image "gcr.io/project/imagename@sha256:c8e91af54fc17faa1c49e2a05def5cbabf8f0a67fc558eb6cbca138061a8400a":
 rpc error: code = Unknown desc = error pulling image configuration: unknown blob

It turned out the image was damaged somehow. After pushing it again and deploying with the new hash, it worked again.

In retrospect, I think the images got damaged because the bucket in GCP that hosts the images had a clean-up policy set on it, which basically removed the images. As a result, the message above can be seen in the events.

Other common issues are a wrong name (gcr.io vs eu.gcr.io), and it can also be that the registry cannot be reached somehow. Again, the hints are in the events; the message there should tell you enough.
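
A quick way to rule out a wrong name or a missing image is to ask the registry directly; this is a sketch, and gcr.io/project/imagename is a placeholder:

# Confirm the image and its digest exist in the registry
gcloud container images describe gcr.io/project/imagename
# List the tags that are actually available
gcloud container images list-tags gcr.io/project/imagename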

More general information (for example, about authentication) can be found here:

Pushing and pulling images

Solution 6 - Kubernetes

I forgot to push the image tagged 1.0.8 to ECR (the AWS image hub)... If you are using Helm and upgrading with:

> helm upgrade minta-user ./src/services/user/helm-chart

make sure that the image tag inside the file values.yaml has actually been pushed (to ECR, Docker Hub, etc.). For example (this is my helm-chart/values.yaml):

replicaCount: 1

image:
  repository: dkr.ecr.us-east-1.amazonaws.com/minta-user
  tag: 1.0.8

You need to make sure that the image with tag 1.0.8 has been pushed!
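
A minimal push sketch; the account ID 123456789012 is a placeholder for the real ECR account prefix:

# Log in to ECR, then tag and push the image the chart references
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag minta-user:1.0.8 123456789012.dkr.ecr.us-east-1.amazonaws.com/minta-user:1.0.8
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/minta-user:1.0.8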

Solution 7 - Kubernetes

In my case, using a Fargate profile, I had the networking in my VPC configured incorrectly. The Fargate containers require access to ECR, which requires a route to the public Internet.

I had the NAT Gateways for my private subnets located in those same private subnets, when they should have been located in public subnets. This error message was the result of that misconfiguration in my case.
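
One way to spot this misconfiguration is to inspect the route table of the private subnet; the subnet ID below is a placeholder:

aws ec2 describe-route-tables --filters Name=association.subnet-id,Values=subnet-0abc123
# The private subnet's route table should have a 0.0.0.0/0 route to a NAT gateway,
# and that NAT gateway should itself sit in a public subnet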

Solution 8 - Kubernetes

Run the command below:

eval $(minikube -p minikube docker-env)

Now build your images, then use those same images in Kubernetes. Do this every time you open a new command-line window.
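
Note that the pod must then reference the locally built image without trying to pull it from a remote registry. A minimal sketch, assuming an image named myapp:dev built inside minikube's Docker daemon:

# Point the docker CLI at minikube's daemon, build there, and run without pulling
eval $(minikube -p minikube docker-env)
docker build -t myapp:dev .
kubectl run myapp --image=myapp:dev --image-pull-policy=Never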

Solution 9 - Kubernetes

Make sure your repository is publicly accessible. Mine was set to private, which produced the "ImagePullBackOff" status.
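
If the repository must stay private, the alternative is a pull secret referenced from the pod spec; this is a sketch, and regcred, <user>, and <password> are placeholders:

kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<password>
# Then reference it from the pod spec:
#   imagePullSecrets:
#   - name: regcred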

Solution 10 - Kubernetes

For Ubuntu lab environments, go to your worker node and edit the following file:

sudo vi /etc/resolv.conf

Add the line nameserver 8.8.8.8 and save the file; this will work for labs.

Solution 11 - Kubernetes

I was facing a similar problem, but instead of one pod, all of my pods were not ready, displaying Ready status 0/1.

I tried a lot of things, but at last I found that the context was not set correctly.

Please use the following command and ensure you are in the correct context:

kubectl config get-contexts
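
The active context is marked with an asterisk in the output. If it points at the wrong cluster, switch contexts; the context name below is a placeholder:

kubectl config use-context my-cluster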

Solution 12 - Kubernetes

Steps:

  • Run docker login.

  • Push the image to Docker Hub.

  • Recreate the pod.

This solved the problem for me.
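
As a sketch, the three steps look like this; the image name and pod name are placeholders:

docker login
docker push <your-user>/<your-image>:<tag>
kubectl delete pod <pod-name>
# If the pod is managed by a Deployment, the controller recreates it automatically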

Attributions

All content for this solution is sourced from the original question on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type             | Original Author   | Original Content on Stack Overflow
Question                 | Devs love ZenUML  | View Question on Stack Overflow
Solution 1 - Kubernetes  | rjdkolb           | View Answer on Stack Overflow
Solution 2 - Kubernetes  | Clemens Tolboom   | View Answer on Stack Overflow
Solution 3 - Kubernetes  | David Louda       | View Answer on Stack Overflow
Solution 4 - Kubernetes  | Mitzi             | View Answer on Stack Overflow
Solution 5 - Kubernetes  | Vincent Gerris    | View Answer on Stack Overflow
Solution 6 - Kubernetes  | dang              | View Answer on Stack Overflow
Solution 7 - Kubernetes  | Powershell Noob   | View Answer on Stack Overflow
Solution 8 - Kubernetes  | Vivek Raj         | View Answer on Stack Overflow
Solution 9 - Kubernetes  | Hassan Rahman     | View Answer on Stack Overflow
Solution 10 - Kubernetes | Shivam Patil      | View Answer on Stack Overflow
Solution 11 - Kubernetes | Harsh             | View Answer on Stack Overflow
Solution 12 - Kubernetes | Shyla             | View Answer on Stack Overflow