Recommended host practices

    The OKD node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods.

    When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in:

    • Increased CPU utilization.

    • Slow pod scheduling.

    • Potential out-of-memory scenarios, depending on the amount of memory in the node.

    • Exhausting the pool of IP addresses.

    • Resource overcommitting, leading to poor user application performance.

    In Kubernetes, a pod holding a single container actually uses two containers: the second container sets up networking before the actual container starts. Therefore, a system running 10 pods actually has 20 containers running.

    Disk IOPS throttling from the cloud provider might have an impact on CRI-O and the kubelet, which can become overloaded when a large number of I/O-intensive pods run on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload.
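    For example, one way to watch for disk saturation from the cluster Prometheus is to query the node_exporter disk-busy time. This is a minimal sketch, assuming the standard node_exporter metric name and an illustrative 2-minute window; filter by the instance and device labels as needed:

    # Approximate fraction of time each disk was busy over the last 2 minutes
    rate(node_disk_io_time_seconds_total[2m])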

    podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40.

    Setting podsPerCore to 0 disables this limit. The default is 0. podsPerCore cannot exceed maxPods.
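    For example, a minimal kubeletConfig snippet that limits pods by core count might look like the following (the value 10 is illustrative):

    kubeletConfig:
      podsPerCore: 10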

    maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node.

    kubeletConfig:
      maxPods: 250

    Creating a KubeletConfig CRD to edit kubelet parameters

    The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters.

    As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation.

    Consider the following guidance:

    • Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools.

    • Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes.

    • As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet. With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the next kubelet machine config is appended with -3.

    If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config.

    If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs.

    Example KubeletConfig CR

    $ oc get kubeletconfig

    NAME           AGE
    set-max-pods   15m

    Example showing a KubeletConfig machine config

    $ oc get mc | grep kubelet

    ...
    99-worker-generated-kubelet-1  b5c5119de007945b6fe6fb215db3b8e2ceb12511  3.2.0  26m
    ...

    The following procedure shows how to configure the maximum number of pods per node on the worker nodes.

    Prerequisites

    1. Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps:

      1. View the machine config pool:

        $ oc describe machineconfigpool <name>

        For example:

        $ oc describe machineconfigpool worker

        Example output

        apiVersion: machineconfiguration.openshift.io/v1
        kind: MachineConfigPool
        metadata:
          creationTimestamp: 2019-02-08T14:52:39Z
          generation: 1
          labels:
            custom-kubelet: set-max-pods (1)

        (1) If a label has been added, it appears under labels.
      2. If the label is not present, add a key/value pair:

        $ oc label machineconfigpool worker custom-kubelet=set-max-pods
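        Optionally, confirm that the label is now present on the pool, for example (using the standard --show-labels flag of oc get):

        $ oc get machineconfigpool worker --show-labels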

    Procedure

    1. View the available machine configuration objects that you can select:

      $ oc get machineconfig

      By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet.

    2. Check the current value for the maximum pods per node:

      $ oc describe node <node_name>

      For example:

      $ oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94

      Look for the pods: <value> entry in the Allocatable stanza:

      Example output

      Allocatable:
        attachable-volumes-aws-ebs:  25
        cpu:                         3500m
        hugepages-1Gi:               0
        hugepages-2Mi:               0
        memory:                      15341844Ki
        pods:                        250
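      Alternatively, you can read just the allocatable pod count with a JSONPath query (a minimal sketch; substitute your node name):

      $ oc get node <node_name> -o jsonpath='{.status.allocatable.pods}{"\n"}'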
    3. Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: KubeletConfig
      metadata:
        name: set-max-pods
      spec:
        machineConfigPoolSelector:
          matchLabels:
            custom-kubelet: set-max-pods (1)
        kubeletConfig:
          maxPods: 500 (2)

      (1) Enter the label from the machine config pool.
      (2) Set the kubelet configuration. In this example, the maximum number of pods per node is set to 500.

      The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst, are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node.

      apiVersion: machineconfiguration.openshift.io/v1
      kind: KubeletConfig
      metadata:
        name: set-max-pods
      spec:
        machineConfigPoolSelector:
          matchLabels:
            custom-kubelet: set-max-pods
        kubeletConfig:
          maxPods: <pod_count>
          kubeAPIBurst: <burst_rate>
          kubeAPIQPS: <QPS>
      1. Update the machine config pool for workers with the label:

        $ oc label machineconfigpool worker custom-kubelet=set-max-pods
      2. Create the KubeletConfig object:

        $ oc create -f change-maxPods-cr.yaml
      3. Verify that the KubeletConfig object is created:

        $ oc get kubeletconfig

        Example output

        NAME           AGE
        set-max-pods   15m

        Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes.
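        To follow the rollout, you can also watch the worker machine config pool until the updated machine count matches the machine count, for example:

        $ oc get machineconfigpool worker -w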

    4. Verify that the changes are applied to the node:

      1. Check on a worker node that the maxPods value changed:

        $ oc describe node <node_name>
      2. Locate the Allocatable stanza:

        ...
        Allocatable:
          attachable-volumes-gce-pd:  127
          ephemeral-storage:          123201474766
          hugepages-1Gi:              0
          hugepages-2Mi:              0
          memory:                     14225400Ki
          pods:                       500 (1)
        ...

        (1) In this example, the pods parameter should report the value you set in the KubeletConfig object.
    5. Verify the change in the KubeletConfig object:

      $ oc get kubeletconfigs set-max-pods -o yaml

      This should show a status of "True" and type: Success, as shown in the following example:

      spec:
        kubeletConfig:
          maxPods: 500
        machineConfigPoolSelector:
          matchLabels:
            custom-kubelet: set-max-pods
      status:
        conditions:
        - lastTransitionTime: "2021-06-30T17:04:07Z"
          message: Success
          status: "True"
          type: Success

    Modifying the number of unavailable worker nodes

    By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process.

    Procedure

    1. Edit the worker machine config pool:

      $ oc edit machineconfigpool worker
    2. Set maxUnavailable to the value that you want:

      spec:
        maxUnavailable: <node_count>

      When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster.
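      Alternatively, you can set the value without opening an editor by patching the pool. This is a minimal sketch; the value 3 is illustrative:

      $ oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":3}}'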

    Control plane node sizing

    The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts:

    • 12 image streams

    • 3 build configurations

    • 6 builds

    • 1 deployment with 2 pod replicas mounting two secrets each

    • 3 services pointing to the previous deployments

    • 3 routes pointing to the previous deployments

    • 10 secrets, 2 of which are mounted by the previous deployments

    • 10 config maps, 2 of which are mounted by the previous deployments

    | Number of worker nodes | Cluster load (namespaces) | CPU cores | Memory (GB) |
    |------------------------|---------------------------|-----------|-------------|
    | 25                     | 500                       | 4         | 16          |
    | 100                    | 1000                      | 8         | 32          |
    | 250                    | 4000                      | 16        | 96          |

    On a large and dense cluster with three control plane (master) nodes, the CPU and memory usage spikes when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, or underlying infrastructure, in addition to intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to an increase in resource usage. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall resource usage on the control plane nodes to at most half of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources.

    The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase.

    The recommendations are based on the data points captured on OKD clusters with OpenShiftSDN as the network plug-in.

    In OKD 4.7, half of a CPU core (500 millicore) is now reserved by the system by default compared to OKD 3.11 and previous versions. The sizes are determined taking that into consideration.

    For large and dense clusters, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, must be performed to free up space in the data store. It is highly recommended that you monitor Prometheus for etcd metrics and defragment it when required before etcd raises a cluster-wide alarm that puts the cluster into a maintenance mode, which only accepts key reads and deletes. Some of the key metrics to monitor are etcd_server_quota_backend_bytes which is the current quota limit, etcd_mvcc_db_total_size_in_use_in_bytes which indicates the actual database usage after a history compaction, and etcd_debugging_mvcc_db_total_size_in_bytes which shows the database size including free space waiting for defragmentation. Instructions on defragging etcd can be found in the Defragmenting etcd data section.
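    For example, the metrics named above can be queried directly in the Prometheus console (a sketch; add label selectors such as the etcd pod or instance as needed):

    # Current backend quota limit, in bytes
    etcd_server_quota_backend_bytes
    # Logical database size in use after compaction, in bytes
    etcd_mvcc_db_total_size_in_use_in_bytes
    # Physical database size, including free space awaiting defragmentation, in bytes
    etcd_debugging_mvcc_db_total_size_in_bytes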

    Etcd writes data to disk and persists proposals on disk, so its performance strongly depends on disk performance. Slow disks and disk activity from other processes might cause long fsync latencies, which cause etcd to miss heartbeats and fail to commit new proposals to the disk on time, resulting in request timeouts and temporary leader loss. It is highly recommended to run etcd on machines backed by SSD/NVMe disks with low latency and high throughput.

    Some of the key metrics to monitor on a deployed OKD cluster are the p99 of the etcd disk write-ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics: etcd_disk_wal_fsync_duration_seconds_bucket reports the etcd disk fsync duration, and etcd_server_leader_changes_seen_total reports the leader changes. To rule out a slow disk and confirm that the disk is reasonably fast, the 99th percentile of etcd_disk_wal_fsync_duration_seconds_bucket should be less than 10 ms.
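    For example, these Prometheus queries report the p99 fsync duration and the recent leader-change rate (a sketch; the 5m and 15m windows are illustrative):

    histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))
    rate(etcd_server_leader_changes_seen_total[15m])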

    Fio, an I/O benchmarking tool, can be used to validate the hardware for etcd before or after creating the OKD cluster. Run fio and analyze the results:

    Assuming a container runtime such as podman or docker is installed on the machine under test, and that /var/lib/etcd, the path where etcd writes its data, exists:

    Procedure

    Run the following if using podman:

    $ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf

    Alternatively, run the following if using docker:

    $ sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf

    The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 10ms.

    Etcd replicates the requests among all the members, so its performance strongly depends on network input/output (IO) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which leads to leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OKD cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric. histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) reports the round trip time for etcd to finish replicating the client requests between the members; it should be less than 50 ms.

    Defragmenting etcd data

    Manual defragmentation must be performed periodically to reclaim disk space after etcd history compaction and other events cause disk fragmentation.

    History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.

    Because etcd writes data to disk, its performance strongly depends on disk performance. Consider defragmenting etcd every month, twice a month, or as needed for your cluster. You can also monitor the etcd_db_total_size_in_bytes metric to determine whether defragmentation is necessary.
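    For example, one rough way to estimate the reclaimable space per member is to subtract the in-use size from the physical size, using metrics mentioned earlier in this section (a sketch; it assumes matching label sets on both series):

    etcd_debugging_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes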

    Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.

    Follow this procedure to defragment etcd data on each etcd member.

    Prerequisites

    • You have access to the cluster as a user with the cluster-admin role.

    Procedure

    1. Determine which etcd member is the leader, because the leader should be defragmented last.

      1. Get the list of etcd pods:

        $ oc get pods -n openshift-etcd -o wide | grep -v quorum-guard | grep etcd

        Example output

        etcd-ip-10-0-159-225.example.redhat.com   3/3   Running   0   175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>   <none>
        etcd-ip-10-0-191-37.example.redhat.com    3/3   Running   0   173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>   <none>
        etcd-ip-10-0-199-170.example.redhat.com   3/3   Running   0   176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>   <none>
      2. Choose a pod and run the following command to determine which etcd member is the leader:

        $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table

        Example output

        Defaulting container name to etcdctl.
        Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
        |          ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
        | https://10.0.191.37:2379  | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
        | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
        | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

        Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.

    2. Defragment an etcd member.

      1. Connect to the running etcd container, passing in the name of a pod that is not the leader:

        $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
      2. Unset the ETCDCTL_ENDPOINTS environment variable:

        sh-4.4# unset ETCDCTL_ENDPOINTS
      3. Defragment the etcd member:

        sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag

        Example output

        Finished defragmenting etcd member[https://localhost:2379]

        If a timeout error occurs, increase the value for --command-timeout until the command succeeds.
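        For example, retrying with a longer timeout (60s is illustrative):

        sh-4.4# etcdctl --command-timeout=60s --endpoints=https://localhost:2379 defrag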

      4. Verify that the database size was reduced:

        sh-4.4# etcdctl endpoint status -w table --cluster

        Example output

        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
        |          ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
        | https://10.0.191.37:2379  | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
        | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |   41 MB |     false |      false |         7 |      91624 |              91624 |        | (1)
        | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

        This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.

      5. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.

        Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.

    3. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

      1. Check if there are any NOSPACE alarms:

        sh-4.4# etcdctl alarm list

        Example output

        memberID:12345678912345678912 alarm:NOSPACE
      2. Clear the alarms:

        sh-4.4# etcdctl alarm disarm

    OKD infrastructure components

    The following infrastructure workloads do not incur OKD worker subscriptions:

    • Kubernetes and OKD control plane services that run on masters

    • The default router

    • The integrated container image registry

    • The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects

    • Cluster aggregated logging

    • Service brokers

    • Red Hat Quay

    • Red Hat OpenShift Container Storage

    • Red Hat Advanced Cluster Manager

    Any node that runs any other container, pod, or component is a worker node that your subscription must cover.

    Moving the monitoring solution

    By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map.

    Procedure

    1. Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |+
          alertmanagerMain:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          prometheusK8s:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          prometheusOperator:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          grafana:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          k8sPrometheusAdapter:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          kubeStateMetrics:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          telemeterClient:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          openshiftStateMetrics:
            nodeSelector:
              node-role.kubernetes.io/infra: ""
          thanosQuerier:
            nodeSelector:
              node-role.kubernetes.io/infra: ""

      Applying this config map forces the components of the monitoring stack to redeploy to infrastructure nodes.

    2. Apply the new config map:

      $ oc create -f cluster-monitoring-configmap.yaml
    3. Watch the monitoring pods move to the new machines:

      $ watch 'oc get pod -n openshift-monitoring -o wide'
    4. If a component has not moved to the infra node, delete the pod with this component:

      $ oc delete pod -n openshift-monitoring <pod>

      The component from the deleted pod is re-created on the infra node.

    Moving the default registry

    You configure the registry Operator to deploy its pods to different nodes.

    Prerequisites

    • Configure additional machine sets in your OKD cluster.

    Procedure

    1. View the config/instance object:

      $ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

      Example output

      apiVersion: imageregistry.operator.openshift.io/v1
      kind: Config
      metadata:
        creationTimestamp: 2019-02-05T13:52:05Z
        finalizers:
        - imageregistry.operator.openshift.io/finalizer
        generation: 1
        name: cluster
        resourceVersion: "56174"
        selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
        uid: 36fd3724-294d-11e9-a524-12ffeee2931b
      spec:
        httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
        logging: 2
        managementState: Managed
        proxy: {}
        replicas: 1
        requests:
          read: {}
          write: {}
        storage:
          s3:
            bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
            region: us-east-1
      status:
        ...
    2. Edit the config/instance object:

      $ oc edit configs.imageregistry.operator.openshift.io/cluster
    3. Modify the spec section of the object to resemble the following YAML:

      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                namespaces:
                - openshift-image-registry
                topologyKey: kubernetes.io/hostname
              weight: 100
        logLevel: Normal
        managementState: Managed
        nodeSelector:
          node-role.kubernetes.io/infra: ""
    4. Verify the registry pod has been moved to the infrastructure node.

      1. Run the following command to identify the node where the registry pod is located:

        $ oc get pods -o wide -n openshift-image-registry
      2. Confirm the node has the label you specified:

        $ oc describe node <node_name>

        Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.

    Moving the router

    You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.

    Prerequisites

    • Configure additional machine sets in your OKD cluster.

    Procedure

    1. View the IngressController custom resource for the router Operator:

      $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

      The command output resembles the following text:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        creationTimestamp: 2019-04-18T12:35:39Z
        finalizers:
        - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
        generation: 1
        name: default
        namespace: openshift-ingress-operator
        resourceVersion: "11341"
        selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
        uid: 79509e05-61d6-11e9-bc55-02ce4781844a
      spec: {}
      status:
        availableReplicas: 2
        conditions:
        - lastTransitionTime: 2019-04-18T12:36:15Z
          status: "True"
          type: Available
        domain: apps.<cluster>.example.com
        endpointPublishingStrategy:
          type: LoadBalancerService
        selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
    2. Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

      $ oc edit ingresscontroller default -n openshift-ingress-operator

      Add the nodeSelector stanza that references the infra label to the spec section, as shown:

      spec:
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/infra: ""
    3. Confirm that the router pod is running on the infra node.

      1. View the list of router pods and note the node name of the running pod:

        $ oc get pod -n openshift-ingress -o wide

        Example output

        NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
        router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
        router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

        In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

      2. View the node status of the running pod:

        $ oc get node <node_name> (1)

        (1) Specify the <node_name> that you obtained from the pod list.

        Example output

        NAME                           STATUS   ROLES          AGE   VERSION
        ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.20.0

        Because the role list includes infra, the pod is running on the correct node.

    Infrastructure node sizing

    The infrastructure node resource requirements depend on the cluster age, nodes, and objects in the cluster, as these factors can lead to an increase in the number of metrics or time series in Prometheus. The following infrastructure node size recommendations are based on the results of cluster maximums and control plane density focused testing.

    | Number of worker nodes | CPU cores | Memory (GB) |
    |------------------------|-----------|-------------|
    | 25                     | 4         | 16          |
    | 100                    | 8         | 32          |
    | 250                    | 16        | 128         |
    | 500                    | 32        | 128         |

    In OKD 4.7, half of a CPU core (500 millicore) is now reserved by the system by default compared to OKD 3.11 and previous versions. This influences the stated sizing recommendations.
