Kubernetes service external ip pending

Tags: Nginx, Kubernetes, Load Balancing

Nginx Problem Overview


I am trying to deploy nginx on Kubernetes (version v1.5.2). I have deployed nginx with 3 replicas; the deployment YAML is below:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80

Now I want to expose its port 80 on port 30062 of the node, so I created the service below:

kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30062
  selector:
    app: nginx
  type: LoadBalancer

The service is working as it should, but its external IP is showing as pending, both in the Kubernetes dashboard and on the terminal.

Nginx Solutions


Solution 1 - Nginx

It looks like you are using a custom Kubernetes cluster (built with minikube, kubeadm or the like). In this case, there is no LoadBalancer integration (unlike AWS or Google Cloud). With this default setup, you can only use NodePort or an Ingress Controller.
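For example, the questioner's Service can be switched to NodePort by changing only its type (a sketch; the name, selector, and ports are taken from the question):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  type: NodePort          # no cloud load balancer needed
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      nodePort: 30062     # then reachable at <node-ip>:30062
```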

With an Ingress Controller you can set up a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress Controller.
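A minimal Ingress sketch for this setup, assuming an ingress controller is already installed on a current cluster; the host name nginx.example.com is hypothetical, and nginx-ils-service is the Service from the question:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: nginx.example.com   # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-ils-service   # the Service from the question
                port:
                  number: 80
```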

Solution 2 - Nginx

If you are using Minikube, there is a magic command!

$ minikube tunnel

Hopefully someone can save a few minutes with this.

Reference link https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel

Solution 3 - Nginx

If you are not using GCE or EKS (e.g. you used kubeadm), you can add an externalIPs spec to your service YAML. You can use the IP associated with your node's primary interface, such as eth0. You can then access the service externally, using the external IP of the node.

...
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.10

Solution 4 - Nginx

I created a single-node k8s cluster using kubeadm. When I tried port-forward and kubectl proxy, the external IP showed as pending.

$ kubectl get svc -n argocd argocd-server
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-server   LoadBalancer   10.107.37.153   <pending>     80:30047/TCP,443:31307/TCP   110s

In my case I've patched the service like this:

kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'

After this, it started serving over the public IP:

$ kubectl get svc argo-ui -n argo
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
argo-ui   LoadBalancer   10.103.219.8   172.31.71.218   80:30981/TCP   7m50s

Solution 5 - Nginx

To access a service on minikube, you need to run the following command:

minikube service [-n NAMESPACE] [--url] NAME

More information here: Minikube GitHub

Solution 6 - Nginx

When using Minikube, you can get the IP and port through which you can access the service by running:

minikube service [service name]

E.g.:

minikube service kubia-http

Solution 7 - Nginx

If it is your private k8s cluster, MetalLB would be a better fit. Below are the steps.

Step 1: Install MetalLB in your cluster

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Step 2: Configure it by using a configmap

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105 #Update this with your Nodes IP range 

Step 3: Create your service to get an external IP (it will be a private IP, though).
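As a sketch, with MetalLB configured as above, any Service of type LoadBalancer should now be assigned an address from the pool (here the questioner's nginx Service):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  type: LoadBalancer   # MetalLB assigns an IP from the 172.42.42.100-105 pool
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
```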

FYR: the original answer includes before/after screenshots showing the service's EXTERNAL-IP as <pending> before the MetalLB installation and populated afterwards.

Solution 8 - Nginx

If running on minikube, don't forget to specify the namespace if you are not using default:

minikube service <service_name> --url --namespace=<namespace_name>

Solution 9 - Nginx

If you are using minikube, run the commands below from the terminal:

$ minikube ip
172.17.0.2
$ curl http://172.17.0.2:31245

or simply:

$ curl http://$(minikube ip):31245

Solution 10 - Nginx

Following @Javier's answer, I decided to go with patching up the external IP for my load balancer.

 $ kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.39.25"]}}'

This will replace the 'pending' status with a patched-up IP address that you can use for your cluster.

For more on this, please see karthik's post on LoadBalancer support with Minikube for Kubernetes.

Not the cleanest way to do it. I needed a temporary solution. Hope this helps somebody.

Solution 11 - Nginx

In case someone is using MicroK8s: You need a network load balancer.

MicroK8s comes with metallb, you can enable it like this:

microk8s enable metallb

<pending> should turn into an actual IP address then.
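When enabling the add-on, MicroK8s asks for (or accepts inline) the address pool MetalLB may hand out; the range below is illustrative only:

```shell
microk8s enable metallb:10.64.140.43-10.64.140.49
```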

Solution 12 - Nginx

In Kubernetes, a Service is the general way to expose an application running on a set of Pods as a network service. There are four types of Service in Kubernetes:

ClusterIP: The Service is only reachable from within the cluster.

NodePort: You can reach the Service from outside the cluster using NodeIP:NodePort. The default node port range is 30000-32767; this range can be changed with --service-node-port-range at cluster creation time.

LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.

ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record with its value. No proxying of any kind is set up.
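For instance, an ExternalName Service is pure DNS aliasing (a sketch; the service name is hypothetical, and foo.bar.example.com is the example host from above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-external-service   # hypothetical name
spec:
  type: ExternalName
  externalName: foo.bar.example.com   # resolved via a CNAME record, no proxying
```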

Only the LoadBalancer type populates the EXTERNAL-IP column, and it does so only if the Kubernetes cluster is able to assign an IP address for that particular service. You can use the MetalLB load balancer to provision IPs for your LoadBalancer services.

I hope this clears up your doubt.

Solution 13 - Nginx

Use NodePort:

$ kubectl run user-login --replicas=2 --labels="run=user-login" --image=kingslayerr/teamproject:version2  --port=5000

$ kubectl expose deployment user-login --type=NodePort --name=user-login-service

$ kubectl describe services user-login-service

(Note down the port)

$ kubectl cluster-info

(Get the IP where the master is running.)

Your service is accessible at (IP):(port)

Solution 14 - Nginx

Adding a solution for those who encountered this error while running on Amazon EKS.

First of all run:

kubectl describe svc <service-name>

Then review the Events field, as in the example output below:

Name:                     some-service
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"some-service","namespace":"default"},"spec":{"ports":[{"port":80,...
Selector:                 app=some
Type:                     LoadBalancer
IP:                       10.100.91.19
Port:                     <unset>  80/TCP
TargetPort:               5000/TCP
NodePort:                 <unset>  31022/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                  Age        From                Message
  ----     ------                  ----       ----                -------
  Normal   EnsuringLoadBalancer    68s  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  67s  service-controller  Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB

Review the error message:

Failed to ensure load balancer: could not find any suitable subnets for creating the ELB

In my case, the reasons that no suitable subnets were available for creating the ELB were:

1: The EKS cluster was deployed on the wrong subnet group: internal subnets instead of public-facing ones.
(*) By default, Services of type LoadBalancer create public-facing load balancers unless the service.beta.kubernetes.io/aws-load-balancer-internal: "true" annotation is provided.

2: The subnets weren't tagged according to the requirements mentioned here.

Tagging VPC with:

Key: kubernetes.io/cluster/yourEKSClusterName
Value: shared

Tagging public subnets with:

Key: kubernetes.io/role/elb
Value: 1

Solution 15 - Nginx

You can patch in the IP of the node where the pods are hosted (the node's private IP); this is an easy workaround.

Taking reference from the posts above, the following worked for me:

kubectl patch service my-loadbalancer-service-name \
  -n lb-service-namespace \
  -p '{"spec": {"type": "LoadBalancer", "externalIPs":["xxx.xxx.xxx.xxx"]}}'

(use the private IP of the physical server/node where the deployment is running)

Solution 16 - Nginx

The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with Google Cloud Platform and AWS. If no such feature is configured, the LoadBalancer IP address field is not populated, stays in pending status, and the Service works the same way as a NodePort-type Service.
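In that case the Service still gets a NodePort, so it remains reachable even while EXTERNAL-IP stays pending. Illustrative output for the questioner's Service (the name and ports are taken from the question; the ClusterIP and node IP are placeholders):

```shell
$ kubectl get svc nginx-ils-service
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-ils-service   LoadBalancer   10.0.171.239   <pending>     80:30062/TCP   5m

$ curl http://<node-ip>:30062   # served via the NodePort despite <pending>
```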

Solution 17 - Nginx

Same issue here:

$ kubectl get svc right-sabertooth-wordpress
NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
right-sabertooth-wordpress   LoadBalancer   10.97.130.7   <pending>     80:30454/TCP,443:30427/TCP

$ minikube service list
|-------------|----------------------------|--------------------------------|
|  NAMESPACE  |            NAME            |              URL               |
|-------------|----------------------------|--------------------------------|
| default     | kubernetes                 | No node port                   |
| default     | right-sabertooth-mariadb   | No node port                   |
| default     | right-sabertooth-wordpress | http://192.168.99.100:30454    |
|             |                            | http://192.168.99.100:30427    |
| kube-system | kube-dns                   | No node port                   |
| kube-system | tiller-deploy              | No node port                   |
|-------------|----------------------------|--------------------------------|

It is, however, accessible via http://192.168.99.100:30454.

Solution 18 - Nginx

There are three ways of exposing your service: NodePort, ClusterIP, and LoadBalancer.

When we use a LoadBalancer, we basically ask our cloud provider to give us a DNS name which can be accessed online (note: a DNS name, not a domain name).

So the LoadBalancer type does not work in our local minikube environment.

Solution 19 - Nginx

If you are on bare metal, you need the NodePort type: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

LoadBalancer works by default with cloud providers like DigitalOcean, AWS, etc.

kubectl edit service ingress-nginx-controller

type: NodePort

spec:
  externalIPs:
  - xxx.xxx.xxx.xx

using the node's public IP.

Solution 20 - Nginx

Check the kube-controller-manager logs. I was able to solve this issue by setting the clusterID tags on the EC2 instance I deployed the cluster on.

Solution 21 - Nginx

If you are not on a supported cloud (AWS, Azure, GCP, etc.), you can't use LoadBalancer without MetalLB (https://metallb.universe.tf/), but it is still in beta.

Solution 22 - Nginx

Maybe the subnet in which you are deploying your service doesn't have enough IPs.

Solution 23 - Nginx

For your use case, the best option is to use a NodePort service instead of the LoadBalancer type, because a load balancer is not available.

Solution 24 - Nginx

I was getting this error on Docker Desktop. I just quit Docker Desktop and started it again. It took a few seconds, then it worked fine.

Solution 25 - Nginx

minikube tunnel

The solution below worked in my case.

First of all, try this command:

minikube tunnel

If it's not working for you, restart the minikube container:

minikube stop

then

minikube start

After that, re-launch the dashboard:

minikube dashboard

Once it is up, execute:

minikube tunnel

Solution 26 - Nginx

I had the same issue with AWS EKS.

Here is how it got resolved. AWS requires:

> - The correct tags for your Amazon Virtual Private Cloud (Amazon VPC) subnets
> - The required AWS Identity and Access Management (IAM) permissions for your cluster's IAM role
> - A valid Kubernetes service definition
> - Load balancers that stay within your account limit
> - Enough free IP addresses on your subnets

Ensure the following tags are set:

Key: kubernetes.io/cluster/yourEKSClusterName
Value: shared

Key: kubernetes.io/role/elb
Value: 1

Key: kubernetes.io/role/internal-elb
Value: 1

FYI: also ensure STS is enabled for the region you are working in; the STS settings can be found under the account's region settings.

Solution 27 - Nginx

I had the same problem on Windows 10 + Docker Desktop 4.7.1 (77678) + Minikube v1.25.2.

Following the official docs, I resolved it with:

PS C:\WINDOWS\system32> kubectl expose deployment sito-php --type=LoadBalancer --port=8080 --name=servizio-php
service/servizio-php exposed
PS C:\WINDOWS\system32> minikube tunnel
 * Tunnel successfully started

 * NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...

 * Starting tunnel for service servizio-php.


PS E:\docker\apache-php> kubectl get service
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.96.0.1      <none>        443/TCP          33h
servizio-php   LoadBalancer   10.98.218.86   127.0.0.1     8080:30270/TCP   4m39s

Then open the browser at http://127.0.0.1:8080/

Solution 28 - Nginx

Deleting the existing service and creating an identical new one solved my problem. The load-balancing IP I had defined was already in use, which is why the external endpoint stayed pending. Changing to a new load-balancing IP still didn't work.

Finally, deleting the existing service and creating a new one solved my problem.

Solution 29 - Nginx

If you are trying to do this in your on-prem cloud, you need an L4 load-balancer service to create the LB instances.

Otherwise you end up with the endless "pending" message you described. It is visible in a video here: https://www.youtube.com/watch?v=p6FYtNpsT1M

You can use open-source tools to solve this problem; the video provides some guidance on how the automation process should work.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Pankaj Jackson | View Question on Stackoverflow
Solution 1 - Nginx | Javier Salmeron | View Answer on Stackoverflow
Solution 2 - Nginx | Peter Zhou | View Answer on Stackoverflow
Solution 3 - Nginx | toppur | View Answer on Stackoverflow
Solution 4 - Nginx | surya vallabhaneni | View Answer on Stackoverflow
Solution 5 - Nginx | Saket Jain | View Answer on Stackoverflow
Solution 6 - Nginx | Hammad Asad | View Answer on Stackoverflow
Solution 7 - Nginx | Thilee | View Answer on Stackoverflow
Solution 8 - Nginx | Mohsin | View Answer on Stackoverflow
Solution 9 - Nginx | user12640668 | View Answer on Stackoverflow
Solution 10 - Nginx | Godfrey | View Answer on Stackoverflow
Solution 11 - Nginx | Willi Mentzel | View Answer on Stackoverflow
Solution 12 - Nginx | Damith Udayanga | View Answer on Stackoverflow
Solution 13 - Nginx | Shubham Sawant | View Answer on Stackoverflow
Solution 14 - Nginx | RtmY | View Answer on Stackoverflow
Solution 15 - Nginx | Dheeraj Sharma | View Answer on Stackoverflow
Solution 16 - Nginx | Medone | View Answer on Stackoverflow
Solution 17 - Nginx | system programmer | View Answer on Stackoverflow
Solution 18 - Nginx | Sunjay Jeffrish | View Answer on Stackoverflow
Solution 19 - Nginx | Math | View Answer on Stackoverflow
Solution 20 - Nginx | Rakesh Pelluri | View Answer on Stackoverflow
Solution 21 - Nginx | Pierluigi Di Lorenzo | View Answer on Stackoverflow
Solution 22 - Nginx | Hemant yadav | View Answer on Stackoverflow
Solution 23 - Nginx | deepu kumar singh | View Answer on Stackoverflow
Solution 24 - Nginx | Dayan | View Answer on Stackoverflow
Solution 25 - Nginx | Abd Abughazaleh | View Answer on Stackoverflow
Solution 26 - Nginx | Gru | View Answer on Stackoverflow
Solution 27 - Nginx | Max Monterumisi | View Answer on Stackoverflow
Solution 28 - Nginx | Shakira Sun | View Answer on Stackoverflow
Solution 29 - Nginx | Joshua Saul | View Answer on Stackoverflow