Pod Overhead

    When you run a Pod on a Node, the Pod itself takes an amount of system resources. These resources are additional to the resources needed to run the container(s) inside the Pod. Pod Overhead is a feature for accounting for the resources consumed by the Pod infrastructure on top of the container requests & limits.

    In Kubernetes, the Pod’s overhead is set at admission time according to the overhead associated with the Pod’s RuntimeClass.

    When Pod Overhead is enabled, the overhead is considered in addition to the sum of container resource requests when scheduling a Pod. Similarly, Kubelet will include the Pod overhead when sizing the Pod cgroup, and when carrying out Pod eviction ranking.

    You need to make sure that the PodOverhead feature gate is enabled (it is on by default as of 1.18) across your cluster, and that a RuntimeClass which defines the overhead field is utilized.
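
    If you are running a version where the gate is not on by default, feature gates are typically enabled via the --feature-gates flag on the relevant components (kube-apiserver, kube-scheduler, and the kubelet); a minimal sketch, assuming you manage those component flags directly:

      --feature-gates=PodOverhead=true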

    To use the PodOverhead feature, you need a RuntimeClass that defines the overhead field. As an example, you could use the following RuntimeClass definition with a virtualizing container runtime that uses around 120MiB per Pod for the virtual machine and the guest OS:
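
    A minimal sketch of such a definition, using the kata-fc handler referenced below and the 250m CPU / 120Mi memory overhead values implied by the rest of this example (the node.k8s.io API version shown here may differ from what your cluster serves):

      apiVersion: node.k8s.io/v1
      kind: RuntimeClass
      metadata:
        name: kata-fc
      handler: kata-fc
      overhead:
        podFixed:
          memory: "120Mi"
          cpu: "250m"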

    Workloads that are created and specify the kata-fc RuntimeClass handler will take the memory and CPU overheads into account for resource quota calculations, node scheduling, and Pod cgroup sizing.

    Consider running the given example workload, test-pod:

      apiVersion: v1
      kind: Pod
      metadata:
        name: test-pod
      spec:
        runtimeClassName: kata-fc
        containers:
        - name: busybox-ctr
          image: busybox
          stdin: true
          tty: true
          resources:
            limits:
              cpu: 500m
              memory: 100Mi
        - name: nginx-ctr
          image: nginx
          resources:
            limits:
              cpu: 1500m
              memory: 100Mi

    At admission time the RuntimeClass admission controller updates the workload’s PodSpec to include the overhead as described in the RuntimeClass. If the PodSpec already has this field defined, the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod to include an overhead.

      kubectl get pod test-pod -o jsonpath='{.spec.overhead}'

    The output is:
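
      map[cpu:250m memory:120Mi]

    (The values shown assume the 250m CPU / 120Mi memory overhead defined for kata-fc in this example; the output will match whatever overhead your RuntimeClass declares.)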

      If a ResourceQuota is defined, the sum of container requests as well as the overhead field are counted.
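
      As an illustration, a quota on requests in the Pod's namespace would have the overhead counted against it as well. A minimal sketch (the quota name and hard limits here are illustrative, not taken from the example above):

        apiVersion: v1
        kind: ResourceQuota
        metadata:
          name: compute-quota    # hypothetical name
          namespace: default
        spec:
          hard:
            requests.cpu: "4"
            requests.memory: 4Gi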

      When the kube-scheduler is deciding which node should run a new Pod, the scheduler considers that Pod’s overhead as well as the sum of container requests for that Pod. For this example, the scheduler adds the requests and the overhead, then looks for a node that has 2.25 CPU and 320 MiB of memory available.
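
      As a quick check of that arithmetic (the containers only set limits, so their requests default to the same values):

        CPU:    500m  + 1500m + 250m  (overhead) = 2250m
        Memory: 100Mi + 100Mi + 120Mi (overhead) = 320Mi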

      Once a Pod is scheduled to a node, the kubelet on that node creates a new cgroup for the Pod. It is within this pod that the underlying container runtime will create containers.

      If a resource has a limit defined for each container (Guaranteed QoS or Burstable QoS with limits defined), the kubelet will set an upper limit for the pod cgroup associated with that resource (cpu.cfs_quota_us for CPU and memory.limit_in_bytes for memory). This upper limit is based on the sum of the container limits plus the overhead defined in the PodSpec.

      For CPU, if the Pod is Guaranteed or Burstable QoS, the kubelet will set cpu.shares based on the sum of container requests plus the overhead defined in the PodSpec.

      Looking at our example, verify the container requests for the workload:
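
      One way to do this is with kubectl's jsonpath output, querying each container's limits:

        kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'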

      The total container requests are 2000m CPU and 200MiB of memory:

        map[cpu:500m memory:100Mi] map[cpu:1500m memory:100Mi]

      Check this against what is observed by the node:

        kubectl describe node | grep test-pod -B2

      The output shows 2250m CPU and 320MiB of memory are requested, which includes PodOverhead:

        Namespace  Name      CPU Requests  CPU Limits   Memory Requests  Memory Limits  AGE
        ---------  ----      ------------  ----------   ---------------  -------------  ---
        default    test-pod  2250m (56%)   2250m (56%)  320Mi (1%)       320Mi (1%)     36m

      Check the Pod’s memory cgroups on the node where the workload is running. In the following example, crictl is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an advanced example to show PodOverhead behavior, and it is not expected that users should need to check cgroups directly on the node.

      First, on the particular node, determine the Pod identifier:
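
      A sketch of one way to capture it, assuming crictl is available on the node and the Pod is named test-pod:

        # Run this on the node where the Pod is scheduled
        POD_ID="$(sudo crictl pods --name test-pod -q)"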

      From this, you can determine the cgroup path for the Pod:

        # Run this on the node where the Pod is scheduled
        sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath

      The resulting cgroup path includes the Pod’s pause container. The Pod level cgroup is one directory above.

        "cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"

      In this specific case, the pod cgroup path is kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2. Verify the Pod level cgroup setting for memory:

        # Run this on the node where the Pod is scheduled.
        # Also, change the name of the cgroup to match the cgroup allocated for your pod.
        cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes

      This is 320 MiB, as expected:
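
        335544320

      (320 MiB is 320 * 1024 * 1024 = 335544320 bytes, which matches the sum of the container limits plus the 120Mi overhead.)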

      A kube_pod_overhead metric is available in kube-state-metrics to help identify when PodOverhead is being utilized and to help observe stability of workloads running with a defined overhead. This functionality is not available in the 1.9 release of kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics from source in the meantime.