kubernetes cannot pull local image

Docker | Kubernetes | Docker Registry

Docker Problem Overview


I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use that image in Kubernetes, the pod fails with an image pull error.

MY POD YAML

kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: mypd
  imagePullSecrets:
    - name: myregistrykey

  volumes:
    - name: mypd
      persistentVolumeClaim:
       claimName: myclaim-1

MY KUBERNETES COMMAND

kubectl create -f pod-yumserver.yaml

THE ERROR

kubectl describe pod yumserver


Name: yumserver
Namespace: default
Image(s):	my/nginx:latest
Node:		127.0.0.1/127.0.0.1
Start Time:	Tue, 26 Apr 2016 16:31:42 +0100
Labels:		name=frontendhttp
Status:		Pending
Reason:		
Message:	
IP:		172.17.0.2
Controllers:	<none>
Containers:
  myfrontend:
    Container ID:	
    Image:		my/nginx:latest
    Image ID:		
    QoS Tier:
      memory:		BestEffort
      cpu:		BestEffort
    State:		Waiting
      Reason:		ErrImagePull
    Ready:		False
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	False 
Volumes:
  mypd:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	myclaim-1
    ReadOnly:	false
  default-token-64w08:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-64w08
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath			Type		Reason			Message
  ---------	--------	-----	----			-------------			--------	------			-------
  13s		13s		1	{default-scheduler }					Normal		Scheduled		Successfully assigned yumserver to 127.0.0.1
  13s		13s		1	{kubelet 127.0.0.1}					Warning		MissingClusterDNS	kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  12s		12s		1	{kubelet 127.0.0.1}	spec.containers{myfrontend}	Normal		Pulling			pulling image "my/nginx:latest"
  8s		8s		1	{kubelet 127.0.0.1}	spec.containers{myfrontend}	Warning		Failed			Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
  8s		8s		1	{kubelet 127.0.0.1}					Warning		FailedSync		Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"

Docker Solutions


Solution 1 - Docker

> So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16

For some reason Timo Reimann only posted the above as a comment, but it definitely should be the official answer to this question, so I'm posting it again.
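Applied to the pod from the question, this is a one-line change (a sketch: with IfNotPresent the kubelet uses the locally built image when it exists, instead of always contacting Docker Hub):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      # explicit policy overrides the implicit Always that :latest triggers;
      # alternatively, pin a non-latest tag (e.g. my/nginx:v1), which
      # defaults to IfNotPresent
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          name: "http-server"
```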

Solution 2 - Docker

Run eval $(minikube docker-env) before building your image.

Full answer here: https://stackoverflow.com/a/40150867

Solution 3 - Docker

This should work whether or not you are using minikube:

  1. Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
  2. Run docker images to find out the REPOSITORY and TAG of your local image, then create a new tag for it:
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>

If TAG for your local image is <none>, you can simply do:

docker tag <local-image-repository> localhost:5000/<local-image-name>
  3. Push to the local registry:
docker push localhost:5000/<local-image-name>

This will automatically add the latest tag to localhost:5000/<local-image-name>. You can check again by doing docker images.

  4. In your YAML file, set imagePullPolicy to IfNotPresent:
...
spec:
  containers:
  - name: <name>
    image: localhost:5000/<local-image-name>
    imagePullPolicy: IfNotPresent
...

That's it. Now your ImagePullError should be resolved.

Note: If you have multiple hosts in the cluster and you want a specific one to host the registry, just replace localhost in all the above steps with the hostname of the host where the registry container runs. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry:

  5. (Optional) Allow connections to the insecure registry on the worker nodes:

echo '{"insecure-registries":["<registry-hostname>:5000"]}' | sudo tee /etc/docker/daemon.json

(A plain sudo echo '...' > /etc/docker/daemon.json would fail, because the output redirection runs as the unprivileged user, not as root. Restart the Docker daemon afterwards for the setting to take effect.)

Solution 4 - Docker

Just add imagePullPolicy to your deployment file; it worked for me:

 spec:
  containers:
  - name: <name>
    image: <local-image-name>
    imagePullPolicy: Never

Solution 5 - Docker

The easiest way to further analyze ErrImagePull problems is to SSH into the node and try to pull the image manually with docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.

Solution 6 - Docker

If you are using a VM driver, you will need to tell Kubernetes to use the Docker daemon running inside the single-node cluster instead of the one on the host.

Run the following command:

eval $(minikube docker-env)

Note: this command needs to be repeated any time you close and restart the terminal session.

Afterward, you can build your image:

docker build -t USERNAME/REPO .

Update your pod manifest as shown above, then run:

kubectl apply -f myfile.yaml

Solution 7 - Docker

Are you using minikube on Linux? You need to install Docker (I think), but you don't need to start it; minikube will do that. Try using the KVM driver with this command:

minikube start --vm-driver kvm

Then run eval $(minikube docker-env) to make sure you are using the minikube Docker environment, and build your container with a tag: docker build -t mycontainername:version .

If you then run docker ps you should see a bunch of minikube containers already running. The KVM utilities are probably already on your machine, but on CentOS/RHEL they can be installed like this:

yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python 

Solution 8 - Docker

Make sure that the "Kubernetes Context" in Docker Desktop is actually "docker-desktop" (i.e. not a remote cluster).

(Right-click the Docker icon, then select "Kubernetes" in the menu.)
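The active context lives in your kubeconfig file; for Docker Desktop's built-in cluster the relevant excerpt looks roughly like this (a sketch; the entries in your file will differ):

```yaml
# ~/.kube/config (excerpt)
current-context: docker-desktop   # kubectl talks to the local cluster
contexts:
- name: docker-desktop
  context:
    cluster: docker-desktop
    user: docker-desktop
```

You can also switch contexts from the command line with kubectl config use-context docker-desktop.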

Solution 9 - Docker

All you need to do is run a docker build from your Dockerfile (or otherwise get the image onto the nodes of your cluster), apply a suitable docker tag, and create the manifest.

Kubernetes doesn't always pull from the registry: for tags other than latest the default imagePullPolicy is IfNotPresent, so it first looks for the image on the node's local storage and only then tries the Docker registry.

  1. Pull the latest nginx image and give it a local tag:

docker pull nginx

docker tag nginx:latest test:test8970

  2. Create a deployment: kubectl run test --image=test:test8970. It won't go to the Docker registry to pull the image; it brings up the pod instantly.

  3. If the image is not present on the local machine, it will try to pull from the Docker registry and fail with ErrImagePull.

  4. If you set imagePullPolicy: Never, it will never consult the registry and will fail with ErrImageNeverPull if the image is not found locally.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
      - image: test:test8970
        name: test
        imagePullPolicy: Never

Solution 10 - Docker

Adding another answer here, as the answers above gave me enough to figure out the cause of my particular instance of this issue. It turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:

<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.27.2</version>
    <configuration>
        <images>
            <image>
                <name>akka-cluster-demo:${docker.image.version}</name>
                <build>
                    <from>openjdk:8-jre-alpine</from>

Adding this:

                    <tags>
                        <tag>latest</tag>
                        <tag>${git.commit.version}</tag>
                    </tags>

The rest continues as before:

                    <ports>
                        <port>8080</port>
                        <port>8558</port>
                        <port>2552</port>
                    </ports>
                    <entryPoint>
                        <exec>
                            <args>/bin/sh</args>
                            <args>-c</args>
                            <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
                        </exec>
                    </entryPoint>
                    <assembly>
                        <inline>
                            <dependencySets>
                                <dependencySet>
                                    <useProjectAttachments>true</useProjectAttachments>
                                    <includes>
                                        <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                                    </includes>
                                    <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                                </dependencySet>
                            </dependencySets>
                        </inline>
                    </assembly>
                </build>
            </image>
        </images>
    </configuration>
</plugin>

Solution 11 - Docker

I was facing a similar issue: the image was present locally, but k8s was not able to pick it up. So I went to the terminal, deleted the old image, and ran eval $(minikube -p minikube docker-env). I rebuilt the image, redeployed the deployment YAML, and it worked.

Solution 12 - Docker

In your case, your YAML file should have imagePullPolicy: Never; see below:

kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: mypd
  imagePullSecrets:
    - name: myregistrykey

  volumes:
    - name: mypd
      persistentVolumeClaim:
       claimName: myclaim-1

found this here https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | ginda | View Question on Stackoverflow
Solution 1 - Docker | Martin Rauscher | View Answer on Stackoverflow
Solution 2 - Docker | Alin | View Answer on Stackoverflow
Solution 3 - Docker | dryairship | View Answer on Stackoverflow
Solution 4 - Docker | Yassine Hakim | View Answer on Stackoverflow
Solution 5 - Docker | Timo Reimann | View Answer on Stackoverflow
Solution 6 - Docker | ttfreeman | View Answer on Stackoverflow
Solution 7 - Docker | anweb | View Answer on Stackoverflow
Solution 8 - Docker | beloblotskiy | View Answer on Stackoverflow
Solution 9 - Docker | redzack | View Answer on Stackoverflow
Solution 10 - Docker | Jack Pines | View Answer on Stackoverflow
Solution 11 - Docker | Rama Sharma | View Answer on Stackoverflow
Solution 12 - Docker | d3javu999 | View Answer on Stackoverflow