How to run kubectl commands inside a container?

Docker, Kubernetes, Dockerfile

Docker Problem Overview


In a container inside a pod, how can I run a command using kubectl? For example, if I need to do something like this inside a container:

> kubectl get pods

I have tried this: in my Dockerfile, I have these commands:

RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN sudo mv ./kubectl /usr/local/bin/kubectl

> EDIT: I was downloading the OSX binary; I have corrected it to the Linux binary. (Corrected by @svenwltr.)

Building the Docker image succeeds, but when I run kubectl get pods inside the container,

kubectl get pods

I get this error:

> The connection to the server : was refused - did you specify the right host or port?

When deploying locally, I ran into this error if my docker-machine was not running, but how can a docker-machine be running inside a container?

Locally, I get around this error by running the following commands: (dev is the name of the docker-machine)

docker-machine env dev
eval $(docker-machine env dev)

Can someone please tell me what I need to do?

Docker Solutions


Solution 1 - Docker

I would use the Kubernetes API; you just need to install curl instead of kubectl, and the rest is RESTful.

curl http://localhost:8080/api/v1/namespaces/default/pods

I'm running the above command on one of my apiservers. Change localhost to your apiserver's IP address or DNS name.

Depending on your configuration you may need to use SSL or provide a client certificate.

To find the API endpoints, you can use --v=8 with kubectl.

example:

kubectl get pods --v=8

Resources:

Kubernetes API documentation

Update for RBAC:

I assume you have already configured RBAC, created a service account for your pod, and are running the pod with it. This service account should have list permissions on pods in the required namespace. To do that, you need to create a role and a role binding for that service account.
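
If those RBAC objects do not exist yet, a minimal sketch with kubectl could look like the following (the names pod-reader and my-sa, and the default namespace, are placeholders for illustration, not something from the original answer):

# Role that only allows reading/listing pods in the default namespace (placeholder names).
kubectl create role pod-reader --verb=get --verb=list --resource=pods -n default
# Bind the role to the service account the pod runs under.
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=default:my-sa -n default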

Every container in a cluster is populated with a token that can be used for authenticating to the API server. To verify, run the following inside the container:

cat /var/run/secrets/kubernetes.io/serviceaccount/token

To make a request to the apiserver, run this inside the container:

curl -ik \
     -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
     https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
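
If you would rather not skip certificate verification with -k, the same request can be made against the cluster CA certificate that is mounted into the pod by default (a small sketch using only the standard service account paths):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods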

Solution 2 - Docker

A bit late to the party here, but here are my two cents:

I've found using kubectl within a container much easier than calling the cluster's API directly.

(Why? Auto authentication!)

Say you're deploying a Node.js project that needs kubectl usage.

  1. Download and install kubectl inside the container
  2. Build your application image, copying kubectl into it (see the sketch below)
  3. Voila! kubectl provides a rich CLI for managing your Kubernetes cluster
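
A rough sketch of steps 1-2 follows; the install path and the final sanity check are assumptions on my part, while the download URL is the same one used in the question. These commands would typically sit in RUN steps of your Dockerfile:

# Download the Linux kubectl binary and put it on the PATH (path is an assumed convention).
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
# Sanity check: prints the client version only, needs no cluster access.
kubectl version --client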

Helpful documentation

--- EDITS ---

After working with kubectl in my cluster pods, I found a more effective way to authenticate pods to be able to make k8s API calls. This method provides stricter authentication.

  1. Create a ServiceAccount for your pod, and configure your pod to use said account. k8s Service Account docs
  2. Configure a RoleBinding or ClusterRoleBinding to allow services to have the authorization to communicate with the k8s API. k8s Role Binding docs
  3. Call the API directly, or use the k8s client to manage API calls for you. I HIGHLY recommend using the client: it has automatic configuration for pods, which removes the authentication token step required with plain requests.

When you're done, you will have the following: ServiceAccount, ClusterRoleBinding, Deployment (your pods)

Feel free to comment if you need some clearer direction, I'll try to help out as much as I can :)

All-in-one example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-101
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-101
  template:
    metadata:
      labels:
        app: k8s-101
    spec:
      serviceAccountName: k8s-101-role
      containers:
      - name: k8s-101
        imagePullPolicy: Always
        image: salathielgenese/k8s-101
        ports:
        - name: app
          containerPort: 3000
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
- kind: ServiceAccount
  name: k8s-101-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role

The salathielgenese/k8s-101 image contains kubectl. So one can just log into a pod container and execute kubectl as if they were running it on the k8s host: kubectl exec -it pod-container-id -- kubectl get pods
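
To try it out, you could save the manifests above to a file (the filename here is just an example) and apply them, then confirm that all three objects exist:

kubectl apply -f k8s-101.yaml   # hypothetical filename holding the all-in-one example
kubectl get serviceaccount k8s-101-role
kubectl get clusterrolebinding k8s-101-role
kubectl get deployment k8s-101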

Solution 3 - Docker

First Question

> /usr/local/bin/kubectl: cannot execute binary file

It looks like you downloaded the OSX binary for kubectl. When running in Docker you probably need the Linux one:

https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

Second Question

If you run kubectl in a properly configured Kubernetes cluster, it should be able to connect to the apiserver.

kubectl basically uses this code to find the apiserver and authenticate: github.com/kubernetes/client-go/rest.InClusterConfig

This means:

  • The host and port of the apiserver are stored in the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT.
  • The access token is mounted to /var/run/secrets/kubernetes.io/serviceaccount/token.
  • The server certificate is mounted to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.

This is all data kubectl needs to know to connect to the apiserver.
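
You can verify all three pieces from inside the container; if any of them is missing, the container is most likely not running as part of a pod (a quick sketch):

# Host and port injected by Kubernetes into every container of a pod.
echo "$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
# Token and CA certificate mounted by default.
ls /var/run/secrets/kubernetes.io/serviceaccount/
# expected output: ca.crt  namespace  token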

Some thoughts on why this might not work:

  • The container doesn't run in Kubernetes.
    • It's not enough to use the same Docker host; the container needs to run as part of a pod definition.
  • The access is restricted by using an authorization plugin (which is not the default).
  • The service account credentials are overwritten by the pod definition (spec.serviceAccountName).

Solution 4 - Docker

I just faced this problem again. It is absolutely possible, but for security reasons let's not give that container cluster-admin privileges via a ClusterRole.

Let's say we want to deploy a pod with access to view and create pods only in a specific namespace of the cluster. In this case, the ServiceAccount with its Role and RoleBinding could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinupcontainers
subjects:
- kind: ServiceAccount
  name: spinupcontainers
  namespace: <YOUR_NAMESPACE>
roleRef:
  kind: Role
  name: spinupcontainers
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spinupcontainers
  # "namespace" omitted if was ClusterRoles because are not namespaced
  namespace: <YOUR_NAMESPACE>
  labels:
    k8s-app: <YOUR_APP_LABEL>
rules:
#
# Give here only the privileges you need
#
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - create
  - update
  - patch
  - delete
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinupcontainers
  namespace: <YOUR_NAMESPACE>
  labels:
    k8s-app: <YOUR_APP_LABEL>
---

If you reference the service account in your deployment with serviceAccountName: spinupcontainers in the pod spec, you don't need to mount any additional secret volumes or attach certificates manually. The kubectl client will pick up the required token from /var/run/secrets/kubernetes.io/serviceaccount. Then you can test whether it is working with something like:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n <YOUR_NAMESPACE>
NAME         READY   STATUS    RESTARTS   AGE
pod1-0       1/1     Running   0          6d17h
pod2-0       1/1     Running   0          6d16h
pod3-0       1/1     Running   0          6d17h
pod3-2       1/1     Running   0          67s

or permission denied:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1
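
Another quick way to check what the attached service account is allowed to do is kubectl auth can-i, run from inside the container (same /kubectl path as in the examples above):

kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl auth can-i list pods -n <YOUR_NAMESPACE>
# prints "yes" or "no" depending on the Role bound to the service account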

Tested on:

$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

Solution 5 - Docker

Combining everything above, this did the trick for me to retrieve all pods from within a container:

curl --insecure -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods

See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#-strong-read-operations-pod-v1-core-strong- for the REST API.
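
If jq is available in the image (an assumption, it is not part of the original answer), the response can be reduced to just the pod names, and the namespace can be read from the mounted service account files instead of hard-coding default:

NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl --insecure -s \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/$NS/pods" \
  | jq -r '.items[].metadata.name'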

Solution 6 - Docker

Running kubectl commands inside a container takes three steps:

  1. Install kubectl
RUN printf '[kubernetes] \nname = Kubernetes\nbaseurl = https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64\nenabled = 1\ngpgcheck = 1\nrepo_gpgcheck=1\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg' \
  | tee /etc/yum.repos.d/kubernetes.repo \
  && cat  /etc/yum.repos.d/kubernetes.repo \
  && yum install -y kubectl

  2. Create a ClusterRoleBinding to the cluster-admin role for the service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysa-admin-sa
  namespace: mysa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mysa-admin-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: mysa-admin-sa
    namespace: mysa

  3. Example CronJob configuration

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scaleup
  namespace: myapp
spec:
  schedule: "00 5 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mysa-admin-sa
          restartPolicy: OnFailure
          containers:
          - name: scale-up
            image: myimage:test
            imagePullPolicy: Always
            command: ["/bin/sh"]
            args: ["-c", "mykubcmd_script >>/mylog.log"]
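
To verify the service account and CronJob wiring without waiting for the schedule, you can trigger a one-off Job from the CronJob and check its logs (a sketch using the names from the manifests above):

kubectl create job scaleup-manual --from=cronjob/scaleup -n myapp
kubectl logs -n myapp job/scaleup-manual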

Solution 7 - Docker

  1. To run a command inside a pod with a single container, use:

kubectl exec -it <pod-name> -- <command-name>

  2. To run a command inside a pod with multiple containers, use:

kubectl exec -it <pod-name> -c <container-name> -- <command-name>

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type         Original Author   Original Content on Stackoverflow
Question             Dreams            View Question on Stackoverflow
Solution 1 - Docker  Farhad Farahi     View Answer on Stackoverflow
Solution 2 - Docker  mster             View Answer on Stackoverflow
Solution 3 - Docker  svenwltr          View Answer on Stackoverflow
Solution 4 - Docker  Nick G            View Answer on Stackoverflow
Solution 5 - Docker  userM1433372      View Answer on Stackoverflow
Solution 6 - Docker  Amit Singh        View Answer on Stackoverflow
Solution 7 - Docker  Aditya Bhuyan     View Answer on Stackoverflow