Pod Autoscaling

    A horizontal pod autoscaler, defined by a HorizontalPodAutoscaler object, specifies how the system should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration.

    Requirements for Using Horizontal Pod Autoscalers

    In order to use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics.

    The following metrics are supported by horizontal pod autoscalers:

    • CPU utilization: the percentage of the requested CPU

    • Memory utilization: the percentage of the requested memory

    Autoscaling

    You can create a horizontal pod autoscaler with the oc autoscale command and specify the minimum and maximum number of pods you want to run, as well as the CPU or memory utilization your pods should target.

    Autoscaling for Memory Utilization is a Technology Preview feature only.

    After a horizontal pod autoscaler is created, it begins attempting to query Heapster for metrics on the pods. It may take one to two minutes before Heapster obtains the initial metrics.

    After metrics are available in Heapster, the horizontal pod autoscaler computes the ratio of the current metric utilization to the desired metric utilization, and scales up or down accordingly. Scaling occurs at a regular interval, but it can take one to two minutes before metrics make their way into Heapster.
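    The replica calculation follows the standard Kubernetes horizontal pod autoscaler formula, rounding up to the nearest whole pod:

      desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue)

    For example, with a target of 50% CPU utilization and a measured average of 100% across two pods, the autoscaler requests ceil(2 × 100 / 50) = 4 replicas.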

    For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase.

    OKD automatically accounts for resources and prevents unnecessary autoscaling during resource spikes, such as during start up. Pods in the unready state have 0 CPU usage when scaling up, and the autoscaler ignores them when scaling down. Pods without known metrics have 0% CPU usage when scaling up and 100% CPU usage when scaling down. This allows for more stability during HPA decisions. To use this feature, you must configure readiness checks to determine if a new pod is ready for use.
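    For example, a minimal readiness check in a container specification might look like the following; the endpoint path and port are placeholders for your application's own health endpoint:

      readinessProbe:
        httpGet:
          path: /healthz    # placeholder health endpoint
          port: 8080        # placeholder container port
        initialDelaySeconds: 5
        timeoutSeconds: 1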

    Use the oc autoscale command and specify at least the maximum number of pods you want to run at any given time. You can optionally specify the minimum number of pods and the average CPU utilization your pods should target, otherwise those are given default values from the OKD server.
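    For instance, a command along the following lines creates an autoscaler that keeps between 1 and 10 replicas of the pods controlled by a deployment configuration and targets 80% average CPU utilization; the dc/frontend name is illustrative and corresponds to the object definition in Example 1:

    $ oc autoscale dc/frontend --min 1 --max 10 --cpu-percent=80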

    For example:

    Example 1. Horizontal Pod Autoscaler Object Definition

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: frontend (1)
    spec:
      scaleTargetRef:
        kind: DeploymentConfig (2)
        name: frontend (3)
        apiVersion: apps/v1 (4)
        subresource: scale
      minReplicas: 1 (5)
      maxReplicas: 10 (6)
      targetCPUUtilizationPercentage: 80 (7)
    (1) The name of this horizontal pod autoscaler object.
    (2) The kind of object to scale.
    (3) The name of the object to scale.
    (4) The API version of the object to scale.
    (5) The minimum number of replicas to which to scale down.
    (6) The maximum number of replicas to which to scale up.
    (7) The percentage of the requested CPU that each pod should ideally be using.

    Alternatively, when using the v2beta1 version of the autoscaling API, the oc autoscale command creates a horizontal pod autoscaler with the following definition:

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: hpa-resource-metrics-cpu (1)
    spec:
      scaleTargetRef:
        apiVersion: apps/v1 (2)
        kind: ReplicationController (3)
        name: hello-hpa-cpu (4)
      minReplicas: 1 (5)
      maxReplicas: 10 (6)
      metrics:
      - type: Resource
        resource:
          name: cpu

    Autoscaling for Memory Utilization

    Autoscaling for Memory Utilization is a Technology Preview feature only.

    Unlike CPU-based autoscaling, memory-based autoscaling requires specifying the autoscaler using YAML instead of using the oc autoscale command. Optionally, you can also specify the minimum number of pods and the average memory utilization your pods should target; otherwise, those are given default values from the OKD server.

    1. Memory-based autoscaling is only available with the v2beta1 version of the autoscaling API. Enable memory-based autoscaling by adding the following to your cluster’s master-config.yaml file:

      apiServerArguments:
        runtime-config:
        - apis/autoscaling/v2beta1=true
      ...
    2. Place the following in a file, such as hpa.yaml:
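      A definition consistent with the callouts below might look like the following; the object name hpa-resource-metrics-memory, the target name hello-hpa-memory, and the 50% memory utilization target are illustrative placeholders:

      apiVersion: autoscaling/v2beta1
      kind: HorizontalPodAutoscaler
      metadata:
        name: hpa-resource-metrics-memory (1)
      spec:
        scaleTargetRef:
          apiVersion: apps/v1 (2)
          kind: ReplicationController (3)
          name: hello-hpa-memory (4)
        minReplicas: 1 (5)
        maxReplicas: 10 (6)
        metrics:
        - type: Resource
          resource:
            name: memory
            targetAverageUtilization: 50 (7)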

      (1) The name of this horizontal pod autoscaler object.
      (2) The API version of the object to scale.
      (3) The kind of object to scale.
      (4) The name of the object to scale.
      (5) The minimum number of replicas to which to scale down.
      (6) The maximum number of replicas to which to scale up.
      (7) The average percentage of the requested memory that each pod should be using.
    3. Then, create the autoscaler from the above file:

      $ oc create -f hpa.yaml

    To view the status of a horizontal pod autoscaler:

    • Use the oc get command to view information on the CPU utilization and pod limits:

      $ oc get hpa/hpa-resource-metrics-cpu
      NAME                       REFERENCE                                 TARGET    CURRENT   MINPODS   MAXPODS   AGE
      hpa-resource-metrics-cpu   DeploymentConfig/default/frontend/scale   80%       79%       1         10        8d

      The output includes the following:

      • Target. The targeted average CPU utilization across all pods controlled by the deployment configuration.

      • Current. The current CPU utilization across all pods controlled by the deployment configuration.

      • Minpods/Maxpods. The minimum and maximum number of replicas that can be set by the autoscaler.

    • Use the oc describe command for detailed information on the horizontal pod autoscaler object.

      $ oc describe hpa/hpa-resource-metrics-cpu
      Name:                         hpa-resource-metrics-cpu
      Namespace:                    default
      Labels:                       <none>
      CreationTimestamp:            Mon, 26 Oct 2015 21:13:47 -0400
      Reference:                    DeploymentConfig/default/frontend/scale
      Target CPU utilization:       80% (1)
      Current CPU utilization:      79% (2)
      Min replicas:                 1 (3)
      Max replicas:                 4 (4)
      ReplicationController pods:   1 current / 1 desired
      Conditions: (5)
        Type            Status   Reason              Message
        ----            ------   ------              -------
        AbleToScale     True     ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
        ScalingActive   True     ValidMetricFound    the HPA was able to successfully calculate a replica count from pods metric http_requests
        ScalingLimited  False    DesiredWithinRange  the desired replica count is within the acceptable range
      Events:
      (1) The average percentage of the requested CPU that each pod should be using.
      (2) The current CPU utilization across all pods controlled by the deployment configuration.
      (3) The minimum number of replicas to scale down to.
      (4) The maximum number of replicas to scale up to.
      (5) If the object used the v2beta1 version of the autoscaling API, the horizontal pod autoscaler status conditions are displayed.

    The horizontal pod autoscaler status conditions are available with the v2beta1 version of the autoscaling API.
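    Enabling that API version uses the same runtime-config stanza in the master-config.yaml file shown in the memory autoscaling steps above; for reference:

      apiServerArguments:
        runtime-config:
        - apis/autoscaling/v2beta1=true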

    The following status conditions are set:

    • AbleToScale indicates whether the horizontal pod autoscaler is able to fetch and update scales, and whether any backoff conditions are preventing scaling.

      • A True condition indicates scaling is allowed.

      • A False condition indicates scaling is not allowed for the reason specified.

    • ScalingLimited indicates that autoscaling is not allowed because a maximum or minimum replica count was reached.

      • A True condition indicates that you need to raise or lower the minimum or maximum replica count in order to scale.

      • A False condition indicates that the requested scaling is allowed.

    If you needed to add or edit the runtime-config line in master-config.yaml, restart the OKD services:

    # master-restart api
    # master-restart controllers

    To see the conditions affecting a horizontal pod autoscaler, use oc describe hpa. Conditions appear in the status.conditions field:

    $ oc describe hpa cm-test
    Name:                         cm-test
    Namespace:                    prom
    Labels:                       <none>
    Annotations:                  <none>
    CreationTimestamp:            Fri, 16 Jun 2017 18:09:22 +0000
    Reference:                    ReplicationController/cm-test
    Metrics:                      ( current / target )
      "http_requests" on pods:    66m / 500m
    Min replicas:                 1
    Max replicas:                 4
    ReplicationController pods:   1 current / 1 desired
    Conditions: (1)
      Type            Status   Reason              Message
      ----            ------   ------              -------
      AbleToScale     True     ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
      ScalingActive   True     ValidMetricFound    the HPA was able to successfully calculate a replica count from pods metric http_request
      ScalingLimited  False    DesiredWithinRange  the desired replica count is within the acceptable range
    Events:
    (1) The horizontal pod autoscaler status messages.

    • The AbleToScale condition indicates whether HPA is able to fetch and update scales, as well as whether any backoff-related conditions would prevent scaling.

    • The ScalingActive condition indicates whether the HPA is enabled (for example, the replica count of the target is not zero) and is able to calculate desired scales. A False status generally indicates problems with fetching metrics.

    • The ScalingLimited condition indicates that the desired scale was capped by the maximum or minimum of the horizontal pod autoscaler. A True status generally indicates that you might need to raise or lower the minimum or maximum replica count constraints on your horizontal pod autoscaler.

    The following is an example of a horizontal pod autoscaler that is unable to scale:

    Conditions:
      Type         Status  Reason          Message
      ----         ------  ------          -------
      AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: replicationcontrollers/scale.extensions "hello-hpa-cpu" not found

    The following is an example of a horizontal pod autoscaler that could not obtain the needed metrics for scaling:

    Conditions:
      Type           Status  Reason                   Message
      ----           ------  ------                   -------
      AbleToScale    True    ReadyForNewScale         the last scale time was sufficiently old as to warrant a new scale
      ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from heapster
    Events: