Checking Kubernetes pod CPU and memory

Kubernetes

Kubernetes Problem Overview


I am trying to see how much memory and CPU are utilized by a Kubernetes pod. I ran the following command:

kubectl top pod podname --namespace=default

I am getting the following error:

W0205 15:14:47.248366    2767 top_pod.go:190] Metrics not available for pod default/podname, age: 190h57m1.248339485s
error: Metrics not available for pod default/podname, age: 190h57m1.248339485s
  1. What do I do about this error? Is there any other way to get CPU and memory usage of the pod?

  2. I saw the sample output of this command which shows CPU as 250m. How is this to be interpreted?

  3. Do we get the same output if we enter the pod and run the linux top command?

Kubernetes Solutions


Solution 1 - Kubernetes

> CHECK WITHOUT METRICS SERVER or ANY THIRD PARTY TOOL


If you want to check a pod's CPU/memory usage without installing any third-party tool, you can read it directly from the pod's cgroup.

  1. Exec into the pod: kubectl exec -it pod_name -- /bin/bash
  2. For CPU usage: cd /sys/fs/cgroup/cpu, then run cat cpuacct.usage
  3. For memory usage: cd /sys/fs/cgroup/memory, then run cat memory.usage_in_bytes

Make sure you have added a resources section (requests and limits) to the deployment, so that the container respects the limits set at the pod level.

> NOTE: cpuacct.usage reports cumulative CPU time in nanoseconds and memory.usage_in_bytes reports bytes; both values change constantly while the pod runs. These paths apply to cgroup v1; on nodes using cgroup v2 the equivalent files are cpu.stat and memory.current under /sys/fs/cgroup.
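Since cpuacct.usage is a cumulative counter, a single read only tells you total CPU time since the container started; to get a usage rate you have to sample it twice and divide by the wall-clock interval. A minimal sketch of that arithmetic (the helper name and the sample values are illustrative, not from any Kubernetes API):

```python
# cpuacct.usage is cumulative CPU time consumed, in nanoseconds.
# Sampling it twice and dividing the delta by the wall-clock interval
# yields the average CPU usage in cores over that interval.

def cpu_cores_used(usage_start_ns: int, usage_end_ns: int, interval_ns: int) -> float:
    """Average CPU cores used over the sampling interval."""
    return (usage_end_ns - usage_start_ns) / interval_ns

# Example: 0.5 s of CPU time consumed over a 1 s window -> 0.5 cores (500m)
print(cpu_cores_used(1_000_000_000, 1_500_000_000, 1_000_000_000))  # 0.5
```

This matches how kubectl top reports CPU: a rate over a sampling window, not a cumulative total.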

Solution 2 - Kubernetes

kubectl top pod <pod-name> -n <fed-name> --containers

FYI, this is on v1.16.2

Solution 3 - Kubernetes

  1. As described in the docs, you should install metrics-server

  2. 250m means 250 millicores (milliCPU). CPU is measured in CPU units; in Kubernetes, 1 CPU unit is equivalent to:

    • 1 AWS vCPU

    • 1 GCP Core

    • 1 Azure vCore

    • 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

    > Fractional values are allowed. A Container that requests 0.5 CPU is guaranteed half as much CPU as one that requests 1 CPU. You can use the suffix m to mean milli: 100m CPU, 100 milliCPU, and 0.1 CPU are all the same. Precision finer than 1m is not allowed.
    >
    > CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.

  3. No. kubectl top pod podname shows the pod's metrics as collected by the kubelet (via metrics-server). Linux top and free run inside the container and report numbers from the /proc virtual filesystem; /proc is not namespaced per cgroup, so they show host-level figures and are not aware of the limits the container runs under.
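To make the unit equivalences above concrete, here is a small sketch; parse_cpu is a hypothetical helper for illustration, not part of any Kubernetes client library:

```python
# Kubernetes CPU quantities: "250m", "0.25 CPU", and 250 millicores
# all denote the same amount of CPU.

def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity string to CPU cores."""
    if quantity.endswith("m"):          # millicores, e.g. "250m"
        return int(quantity[:-1]) / 1000
    return float(quantity)              # plain cores, e.g. "0.25" or "1"

print(parse_cpu("250m"))                       # 0.25
print(parse_cpu("0.1") == parse_cpu("100m"))   # True
```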


Solution 4 - Kubernetes

Use k9s for a super easy way to check all your resources' cpu and memory usage.


Solution 5 - Kubernetes

You need to run the metrics server for the following commands to return data:

  1. kubectl get hpa
  2. kubectl top node
  3. kubectl top pods

Without the metrics server, exec into the pod and read the cgroup directly:

  1. kubectl exec -it pods/{pod_name} -- sh
  2. cat /sys/fs/cgroup/memory/memory.usage_in_bytes

This prints the pod's memory usage in bytes.

Solution 6 - Kubernetes

A quick way to check CPU/Memory is by using the following kubectl command. I found it very useful.

kubectl describe PodMetrics <pod_name>

Replace <pod_name> with a pod name you get from:

kubectl get pod

Solution 7 - Kubernetes

Not sure why this isn't mentioned yet:

  1. To see all pods with their age: kubectl get pods --all-namespaces
  2. To see memory and CPU: kubectl top pods --all-namespaces

Solution 8 - Kubernetes

As Heapster is deprecated and will see no further releases, you should install metrics-server instead.

You can install metrics-server as follows:

  1. Clone the metrics-server GitHub repo: git clone https://github.com/kubernetes-incubator/metrics-server.git

  2. Edit the deploy/1.8+/metrics-server-deployment.yaml file and add the following flags to the command section:

- command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP

  3. Run: kubectl apply -f deploy/1.8+

It will install all the requirements you need for metrics server.

For more info, please have a look at my following answer:

> https://stackoverflow.com/questions/53725248/how-to-enable-kubeapi-server-for-hpa-autoscaling-metrics/53727101#53727101

Solution 9 - Kubernetes

To check the usage of individual pods, you can query Docker directly on the worker node; note this only works when Docker is the container runtime:

$ docker ps | grep <pod_name>

This lists the matching running containers on that node. Then check CPU and memory utilization with:

$ docker stats

CONTAINER_ID  NAME   CPU%   MEM   USAGE/LIMIT   MEM%   NET_I/O   BLOCK_I/O   PIDS

Solution 10 - Kubernetes

An alternative approach without having to install the metrics server.

It requires crictl to be installed on the worker nodes where the pods run; there is a Kubernetes task for this in the official docs.

Once you have it installed you can use the commands below. (I had to use sudo in my case, but it may not be required depending on how your Kubernetes cluster was installed.)

  1. Find the container ID of the pod: sudo crictl ps
  2. Use stats to get CPU and RAM: sudo crictl stats <CONTAINERID>

Sample output for reference:

CONTAINER           CPU %               MEM                 DISK                INODES
873f04b6cef94       0.50                54.16MB             28.67kB             8

Solution 11 - Kubernetes

You need to deploy Heapster (now deprecated) or metrics-server to see the CPU and memory usage of the pods.

Solution 12 - Kubernetes

In case you are using minikube, you can enable the metrics-server addon; this will show the information in the dashboard.

Solution 13 - Kubernetes

If you exec into your pod using sh or bash, you can run the top command, which gives you some stats about resource utilisation that update every few seconds.


Solution 14 - Kubernetes

You can use the Metrics API (metrics.k8s.io) directly:

For example:

kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq

{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "nginx-7fb5bc5df-b6pzh",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh",
    "creationTimestamp": "2021-06-14T07:54:31Z"
  },
  "timestamp": "2021-06-14T07:53:54Z",
  "window": "30s",
  "containers": [
    {
      "name": "nginx",
      "usage": {
        "cpu": "33239n",
        "memory": "13148Ki"
      }
    },
    {
      "name": "git-repo-syncer",
      "usage": {
        "cpu": "0",
        "memory": "6204Ki"
      }
    }
  ]
}

where nginx-7fb5bc5df-b6pzh is the pod's name.

Pay attention: CPU here is measured in nanocores (the n suffix), where 1×10⁹ nanocores = 1 CPU, and memory is reported in Ki (kibibytes).
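A small sketch of how those quantity strings from the API response can be decoded; the helper names are illustrative, not part of any client library:

```python
# Quantities returned by the metrics.k8s.io API: CPU may carry an
# "n" (nanocores) suffix, memory a "Ki" (kibibytes) suffix.

def nano_cpu_to_cores(quantity: str) -> float:
    """Convert e.g. '33239n' to CPU cores (1e9 nanocores = 1 core)."""
    if quantity.endswith("n"):
        return int(quantity[:-1]) / 1e9
    return float(quantity)

def ki_to_bytes(quantity: str) -> int:
    """Convert e.g. '13148Ki' to bytes (1 Ki = 1024 bytes)."""
    if quantity.endswith("Ki"):
        return int(quantity[:-2]) * 1024
    return int(quantity)

print(nano_cpu_to_cores("33239n"))  # 3.3239e-05 cores (well under 1m)
print(ki_to_bytes("13148Ki"))       # 13463552 bytes (~12.8 MiB)
```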

Solution 15 - Kubernetes

If you use Prometheus operator or VictoriaMetrics operator for Kubernetes monitoring, then the following PromQL queries can be used for determining per-container, per-pod and per-node resource usage:

  • Per-container memory usage in bytes:
sum(container_memory_usage_bytes{container!~"POD|"}) by (namespace,pod,container)
  • Per-container CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!~"POD|"}[5m])) by (namespace,pod,container)
  • Per-pod memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (namespace,pod)
  • Per-pod CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace,pod)
  • Per-node memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (node)
  • Per-node CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
  • Per-node memory usage percentage:
100 * (
  sum(container_memory_usage_bytes{container!=""}) by (node)
    / on(node)
  kube_node_status_capacity{resource="memory"}
)
  • Per-node CPU usage percentage:
100 * (
  sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
    / on(node)
  kube_node_status_capacity{resource="cpu"}
)

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | aniztar | View Question on Stackoverflow
Solution 1 - Kubernetes | Dashrath Mundkar | View Answer on Stackoverflow
Solution 2 - Kubernetes | Umakant | View Answer on Stackoverflow
Solution 3 - Kubernetes | Diego Mendes | View Answer on Stackoverflow
Solution 4 - Kubernetes | Nick | View Answer on Stackoverflow
Solution 5 - Kubernetes | chetan mahajan | View Answer on Stackoverflow
Solution 6 - Kubernetes | Suvoraj Biswas | View Answer on Stackoverflow
Solution 7 - Kubernetes | shimi_tap | View Answer on Stackoverflow
Solution 8 - Kubernetes | Prafull Ladha | View Answer on Stackoverflow
Solution 9 - Kubernetes | Ambir | View Answer on Stackoverflow
Solution 10 - Kubernetes | prasun | View Answer on Stackoverflow
Solution 11 - Kubernetes | P Ekambaram | View Answer on Stackoverflow
Solution 12 - Kubernetes | Javier Aviles | View Answer on Stackoverflow
Solution 13 - Kubernetes | Ian Robertson | View Answer on Stackoverflow
Solution 14 - Kubernetes | drFunJohn | View Answer on Stackoverflow
Solution 15 - Kubernetes | valyala | View Answer on Stackoverflow