Automatically scaling pods based on custom metrics

    The custom metrics autoscaler is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

    The custom metrics autoscaler uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the OKD horizontal pod autoscaler (HPA).

    The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers, also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that OKD can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling. The custom metrics autoscaler currently supports only the Prometheus trigger, which can use the installed OKD monitoring or an external Prometheus server as the metrics source.

    To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object, which defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed.

    You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.

    You can verify that the autoscaling has taken place by reviewing the number of pods in your custom resource or by reviewing the Custom Metrics Autoscaler Operator logs for messages similar to the following:

    Successfully updated ScaleTarget
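
    For example, one way to check the Operator logs for this message, assuming the Operator is installed in the openshift-keda namespace as described later in this section:

    $ oc logs deployment/custom-metrics-autoscaler-operator -n openshift-keda | grep -i "scaletarget"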

    Installing the custom metrics autoscaler

    You can use the OKD web console to install the Custom Metrics Autoscaler Operator.

    The installation creates five CRDs:

    • ClusterTriggerAuthentication

    • KedaController

    • ScaledJob

    • ScaledObject

    • TriggerAuthentication

    Prerequisites

    • Ensure that you have downloaded the pull secret as shown in Obtaining the installation program in the installation documentation for your platform.

      If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.

    • If you use the community KEDA:

      • Uninstall the community KEDA. You cannot run both KEDA and the custom metrics autoscaler on the same OKD cluster.

      • Remove the KEDA 1.x custom resource definitions by running the following commands:

        $ oc delete crd scaledobjects.keda.k8s.io
        $ oc delete crd triggerauthentications.keda.k8s.io

    Procedure

    1. In the OKD web console, click Operators → OperatorHub.

    2. Choose Custom Metrics Autoscaler from the list of available Operators, and click Install.

    3. On the Install Operator page, ensure that the All namespaces on the cluster (default) option is selected for Installation Mode. This installs the Operator in all namespaces.

    4. Ensure that the openshift-keda namespace is selected for Installed Namespace. OKD creates the namespace, if not present in your cluster.

    5. Click Install.

    6. Verify the installation by listing the Custom Metrics Autoscaler Operator components:

      1. Navigate to Workloads → Pods.

      2. Select the openshift-keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running.

      3. Navigate to Workloads → Deployments to verify that the custom-metrics-autoscaler-operator deployment is running.

    7. Optional: Verify the installation from the OpenShift CLI by running the following command:

      $ oc get all -n openshift-keda

      The output appears similar to the following:

      Example output

      NAME                                                       READY   STATUS    RESTARTS   AGE
      pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp    1/1     Running   0          18m

      NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/custom-metrics-autoscaler-operator    1/1     1            1           18m

      NAME                                                             DESIRED   CURRENT   READY   AGE
      replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8    1         1         1       18m
    8. Install the KedaController custom resource, which creates the required CRDs:

      1. In the OKD web console, click Operators → Installed Operators.

      2. Click Custom Metrics Autoscaler.

      3. On the Operator Details page, click the KedaController tab.

      4. On the KedaController tab, click Create KedaController and edit the file.

        kind: KedaController
        apiVersion: keda.sh/v1alpha1
        metadata:
          name: keda
          namespace: openshift-keda
        spec:
          watchNamespace: '' (1)
          operator:
            logLevel: info (2)
            logEncoder: console (3)
          metricsServer:
            logLevel: '0' (4)
          serviceAccount: {}
        1Specifies the namespaces that the custom autoscaler should watch. Enter names in a comma-separated list. Omit or set empty to watch all namespaces. The default is empty.
        2Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug, info, error. The default is info.
        3Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json. The default is console.
        4Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug. The default is 0.
      5. Click Create to create the KedaController.

    Understanding the custom metrics autoscaler triggers

    Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods.

    The custom metrics autoscaler currently supports only the Prometheus trigger, which can use the installed OKD monitoring or an external Prometheus server as the metrics source.

    You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow.

    You can scale applications based on Prometheus metrics. See Additional resources for information on the configurations required to use OKD monitoring as a source for metrics.

    If Prometheus is collecting metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler has no metrics to scale on.
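
    For example, a minimal spec fragment with illustrative values that keeps at least one pod available so Prometheus always has something to scrape:

    spec:
      minReplicaCount: 1   # keep one pod running so the metrics source never disappears
      maxReplicaCount: 100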

    Example scaled object with a Prometheus target

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: prom-scaledobject
      namespace: my-namespace
    spec:
      ...
      triggers:
      - type: prometheus (1)
        metadata:
          serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 (2)
          namespace: kedatest (3)
          metricName: http_requests_total (4)
          threshold: '5' (5)
          query: sum(rate(http_requests_total{job="test-app"}[1m])) (6)
          authModes: "basic" (7)
    1Specifies Prometheus as the scaler/trigger type.
    2Specifies the address of the Prometheus server. This example uses OKD monitoring.
    3Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if you are using OKD monitoring as a source for the metrics.
    4Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique.
    5Specifies the value to start scaling for.
    6Specifies the Prometheus query to use.
    7Specifies the authentication method to use. Prometheus scalers support bearer authentication, basic authentication, or TLS authentication. You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret.

    Additional resources

    Understanding custom metrics autoscaler trigger authentications

    A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass OKD secrets, platform-native pod authentication mechanisms, environment variables, and so on.

    You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace.

    Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces.

    Trigger authentications and cluster trigger authentication use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object.
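
    For example, a scaled object that references a cluster trigger authentication includes a fragment similar to the following (the name is illustrative):

    triggers:
    - authenticationRef:
        name: prom-cluster-triggerauthentication
        kind: ClusterTriggerAuthentication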

    Example trigger authentication with a secret

    kind: TriggerAuthentication
    apiVersion: keda.sh/v1alpha1
    metadata:
      name: secret-triggerauthentication
      namespace: my-namespace (1)
    spec:
      secretTargetRef: (2)
      - parameter: user-name (3)
        name: my-secret (4)
        key: USER_NAME (5)
      - parameter: password
        name: my-secret
        key: PASSWORD
    1Specifies the namespace of the object you want to scale.
    2Specifies that this trigger authentication uses a secret for authorization.
    3Specifies the authentication parameter to supply by using the secret.
    4Specifies the name of the secret to use.
    5Specifies the key in the secret to use with the specified parameter.

    Example cluster trigger authentication with a secret

    kind: ClusterTriggerAuthentication
    apiVersion: keda.sh/v1alpha1
    metadata: (1)
      name: secret-cluster-triggerauthentication
    spec:
      secretTargetRef: (2)
      - parameter: user-name (3)
        name: secret-name (4)
        key: user-name (5)
      - parameter: password
        name: secret-name
        key: password
    1Note that no namespace is used with a cluster trigger authentication.
    2Specifies that this trigger authentication uses a secret for authorization.
    3Specifies the authentication parameter to supply by using the secret.
    4Specifies the name of the secret to use.
    5Specifies the key in the secret to use with the specified parameter.

    Example trigger authentication with a token

    kind: TriggerAuthentication
    apiVersion: keda.sh/v1alpha1
    metadata:
      name: token-triggerauthentication
      namespace: my-namespace (1)
    spec:
      secretTargetRef: (2)
      - parameter: bearerToken (3)
        name: my-token-2vzfq (4)
        key: token (5)
      - parameter: ca
        name: my-token-2vzfq
        key: ca.crt
    1Specifies the namespace of the object you want to scale.
    2Specifies that this trigger authentication uses a secret for authorization.
    3Specifies the authentication parameter to supply by using the token.
    4Specifies the name of the token to use.
    5Specifies the key in the token to use with the specified parameter.

    Example trigger authentication with an environment variable

    kind: TriggerAuthentication
    apiVersion: keda.sh/v1alpha1
    metadata:
      name: env-var-triggerauthentication
      namespace: my-namespace (1)
    spec:
      env: (2)
      - parameter: access_key (3)
        name: ACCESS_KEY (4)
        containerName: my-container (5)
    1Specifies the namespace of the object you want to scale.
    2Specifies that this trigger authentication uses environment variables for authorization.
    3Specify the parameter to set with this variable.
    4Specify the name of the environment variable.
    5Optional: Specify a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object.
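
    For reference, the ACCESS_KEY variable in this example is read from the scale target itself. A hypothetical fragment of the referenced Deployment might define it as follows (the secret name is illustrative):

    containers:
    - name: my-container
      env:
      - name: ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: my-credentials
            key: access-key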

    Example trigger authentication with pod authentication providers

    kind: TriggerAuthentication
    apiVersion: keda.sh/v1alpha1
    metadata:
      name: pod-identity-triggerauthentication
      namespace: my-namespace (1)
    spec:
      podIdentity: (2)
        provider: aws-eks (3)
    1Specifies the namespace of the object you want to scale.
    2Specifies that this trigger authentication uses a platform-native pod authentication method for authorization.
    3Specifies a pod identity. Supported values are none, azure, aws-eks, or aws-kiam. The default is none.

    Additional resources

    Creating a custom metrics autoscaler trigger authentication

    You use trigger authentications and cluster trigger authentications by creating the authentication with a custom resource and then adding a reference to the authentication in a scaled object or scaled job.

    Prerequisites

    • The Custom Metrics Autoscaler Operator must be installed.

    • If you are using a secret, the Secret object must exist, for example:

      Example secret

      apiVersion: v1
      kind: Secret
      metadata:
        name: my-secret
      data:
        user-name: <base64_username>
        password: <base64_password>
    Procedure

    1. Create the TriggerAuthentication or ClusterTriggerAuthentication object:

      1. Create a YAML file that defines the object:

        Example trigger authentication with a secret
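
        For example, reusing the secret-based trigger authentication shown earlier:

        kind: TriggerAuthentication
        apiVersion: keda.sh/v1alpha1
        metadata:
          name: secret-triggerauthentication
          namespace: my-namespace
        spec:
          secretTargetRef:
          - parameter: user-name
            name: my-secret
            key: USER_NAME
          - parameter: password
            name: my-secret
            key: PASSWORD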

      2. Create the TriggerAuthentication object:

        $ oc create -f <file-name>.yaml
    2. Create or edit a ScaledObject YAML file:

      Example scaled object

      apiVersion: keda.sh/v1alpha1
      kind: ScaledObject
      metadata:
        name: scaledobject
        namespace: my-namespace
      spec:
        scaleTargetRef:
          name: example-deployment
        maxReplicaCount: 100
        minReplicaCount: 0
        pollingInterval: 30
        triggers:
        - type: prometheus
          metadata:
            serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
            namespace: kedatest # replace <NAMESPACE>
            metricName: http_requests_total
            threshold: '5'
            query: sum(rate(http_requests_total{job="test-app"}[1m]))
            authModes: "basic"
        - authenticationRef: (1)
            name: prom-triggerauthentication
          metadata:
            name: prom-triggerauthentication
          type: object
        - authenticationRef: (2)
            name: prom-cluster-triggerauthentication
            kind: ClusterTriggerAuthentication
          metadata:
            name: prom-cluster-triggerauthentication
          type: object
      1Optional: Specify a trigger authentication.
      2Optional: Specify a cluster trigger authentication. You must include the kind: ClusterTriggerAuthentication parameter.

      It is not necessary to specify both a namespace trigger authentication and a cluster trigger authentication.

    3. Create the object. For example:

      $ oc apply -f <file-name>

    Configuring the custom metrics autoscaler to use OKD monitoring

    You can use the installed OKD monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform.

    You must perform the following tasks, as described in this section:

    • Create a service account to get a token.

    • Create a role.

    • Add that role to the service account.

    • Reference the token in the trigger authentication object used by Prometheus.

    Prerequisites

    • OKD monitoring must be installed.

    • Monitoring of user-defined workloads must be enabled in OKD monitoring, as described in the Creating a user-defined workload monitoring config map section. A sketch of that config map is shown after this list.

    • The Custom Metrics Autoscaler Operator must be installed.
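
    The following is a minimal sketch of the config map described in Creating a user-defined workload monitoring config map; it assumes the default OKD monitoring stack and enables monitoring for user-defined projects:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          enableUserWorkload: true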

    Procedure

    1. Change to the project with the object you want to scale:

      $ oc project my-project
    2. Use the following command to create a service account, if your cluster does not have one:

      $ oc create serviceaccount <service_account>

      where:

      <service_account>

      Specifies the name of the service account.

    3. Use the following command to locate the token assigned to the service account:

      $ oc describe serviceaccount <service_account>

      where:

      <service_account>

      Specifies the name of the service account.

      Example output

      Name:                thanos
      Namespace:           my-project
      Labels:              <none>
      Annotations:         <none>
      Image pull secrets:  thanos-dockercfg-nnwgj
      Mountable secrets:   thanos-dockercfg-nnwgj
      Tokens:              thanos-token-9g4n5 (1)
      Events:              <none>
      1Use this token in the trigger authentication.
    4. Create a trigger authentication with the service account token:

      1. Create a YAML file similar to the following:

        apiVersion: keda.sh/v1alpha1
        kind: TriggerAuthentication
        metadata:
          name: keda-trigger-auth-prometheus
        spec:
          secretTargetRef: (1)
          - parameter: bearerToken (2)
            name: thanos-token-9g4n5 (3)
            key: token (4)
          - parameter: ca
            name: thanos-token-9g4n5
            key: ca.crt
        1Specifies that this object uses a secret for authorization.
        2Specifies the authentication parameter to supply by using the token.
        3Specifies the name of the token to use.
        4Specifies the key in the token to use with the specified parameter.
      2. Create the CR object:

        $ oc create -f <file-name>.yaml
    5. Create a role for reading Thanos metrics:

      1. Create a YAML file with the following parameters:

        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          name: thanos-metrics-reader
        rules:
        - apiGroups:
          - ""
          resources:
          - pods
          verbs:
          - get
        - apiGroups:
          - metrics.k8s.io
          resources:
          - pods
          - nodes
          verbs:
          - get
          - list
          - watch
      2. Create the CR object:

        $ oc create -f <file-name>.yaml
    6. Create a role binding for reading Thanos metrics:

      1. Create a YAML file similar to the following:

        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          name: thanos-metrics-reader (1)
          namespace: my-project (2)
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: thanos-metrics-reader
        subjects:
        - kind: ServiceAccount
          name: thanos (3)
          namespace: my-project (4)
        1Specifies the name of the role you created.
        2Specifies the namespace of the object you want to scale.
        3Specifies the name of the service account to bind to the role.
        4Specifies the namespace of the object you want to scale.
      2. Create the CR object:

        $ oc create -f <file-name>.yaml

    You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in the following sections. To use OKD monitoring as the source, specify the prometheus type in the trigger, or scaler, and use https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 as the serverAddress.
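
    For example, a trigger that uses OKD monitoring and the trigger authentication created above might look like the following fragment (the metric, namespace, and query are the illustrative values used in the earlier examples):

    triggers:
    - type: prometheus
      metadata:
        serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
        namespace: kedatest
        metricName: http_requests_total
        threshold: '5'
        query: sum(rate(http_requests_total{job="test-app"}[1m]))
        authModes: "bearer"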

    Additional resources

    Understanding how to add custom metrics autoscalers

    To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job.

    You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload.

    You can create a custom metrics autoscaler for a workload that is created by a Deployment, StatefulSet, or custom resource object.

    Prerequisites

    • The Custom Metrics Autoscaler Operator must be installed.

    Procedure

    1. Create a YAML file similar to the following:

      Example scaled object
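
      The following is a minimal sketch of such a scaled object; the field names follow the KEDA ScaledObject API and the earlier Prometheus example, and the values are illustrative. Adjust them for your workload; the callouts below describe each field.

      apiVersion: keda.sh/v1alpha1
      kind: ScaledObject
      metadata:
        name: scaledobject
        namespace: my-namespace
      spec:
        scaleTargetRef:
          apiVersion: apps/v1 (1)
          name: example-deployment (2)
          kind: Deployment (3)
          envSourceContainerName: my-container (4)
        cooldownPeriod: 200 (5)
        maxReplicaCount: 100 (6)
        minReplicaCount: 0 (7)
        pollingInterval: 30 (8)
        advanced:
          restoreToOriginalReplicaCount: false (9)
          horizontalPodAutoscalerConfig: (10)
            behavior:
              scaleDown:
                stabilizationWindowSeconds: 300
                policies:
                - type: Percent
                  value: 100
                  periodSeconds: 15
        triggers:
        - type: prometheus (11)
          metadata:
            serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
            namespace: kedatest
            metricName: http_requests_total
            threshold: '5'
            query: sum(rate(http_requests_total{job="test-app"}[1m]))
            authModes: "basic"
        - authenticationRef: (12)
            name: prom-triggerauthentication
          metadata:
            name: prom-triggerauthentication
          type: object
        - authenticationRef: (13)
            name: prom-cluster-triggerauthentication
            kind: ClusterTriggerAuthentication
          metadata:
            name: prom-cluster-triggerauthentication
          type: object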

      1Optional: Specifies the API version of the target resource. The default is apps/v1.
      2Specifies the name of the object that you want to scale.
      3Specifies the kind of the object to scale as Deployment, StatefulSet, or CustomResource.
      4Optional: Specifies the name of the container in the target resource, from which the custom autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0].
      5Optional: Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0. The default is 300.
      6Optional: Specifies the maximum number of replicas when scaling up. The default is 100.
      7Optional: Specifies the minimum number of replicas when scaling down.
      8Optional: Specifies the interval in seconds to check each trigger on. The default is 30.
      9Optional: Specifies whether to scale back the target resource to the original replicas count after the scaled object is deleted. The default is false, which keeps the replica count as it is when the scaled object is deleted.
      10Optional: Specifies a scaling policy to use to control the rate to scale pods up or down. For more information, see the link in the “Additional resources” section that follows.
      11Specifies the trigger to use as the basis for scaling, as described in the “Understanding the custom metrics autoscaler triggers” section. This example uses OKD monitoring.
      12Optional: Specifies a trigger authentication, as described in the “Creating a custom metrics autoscaler trigger authentication” section.
      13Optional: Specifies a cluster trigger authentication, as described in the “Creating a custom metrics autoscaler trigger authentication” section.

      It is not necessary to specify both a namespace trigger authentication and a cluster trigger authentication.

    2. Create the custom metrics autoscaler:

      $ oc create -f <file-name>.yaml

    Verification

    • View the command output to verify that the custom metrics autoscaler was created:

      $ oc get scaledobject <scaled_object_name>

      Example output

      NAME           SCALETARGETKIND      SCALETARGETNAME      MIN   MAX   TRIGGERS     AUTHENTICATION               READY   ACTIVE   FALLBACK   AGE
      scaledobject   apps/v1.Deployment   example-deployment   0     50    prometheus   prom-triggerauthentication   True    True     True       17s

      Note the following fields in the output:

    • TRIGGERS: Indicates the trigger, or scaler, that is being used.

    • AUTHENTICATION: Indicates the name of any trigger authentication being used.

    • READY: Indicates whether the scaled object is ready to start scaling:

      • If True, the scaled object is ready.

      • If False, the scaled object is not ready because of a problem in one or more of the objects you created.

    • ACTIVE: Indicates whether scaling is taking place:

      • If True, scaling is taking place.

      • If False, scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created.

    • FALLBACK: Indicates whether the custom metrics autoscaler is able to get metrics from the source:

      • If False, the custom metrics autoscaler is getting metrics.

      • If True, the custom metrics autoscaler is not getting metrics because there are no metrics or there is a problem in one or more of the objects you created.

    Additional resources

    You can create a custom metrics autoscaler for any Job object.

    Prerequisites

    • The Custom Metrics Autoscaler Operator must be installed.

    Procedure

    1. Create a YAML file similar to the following:

      kind: ScaledJob
      apiVersion: keda.sh/v1alpha1
      metadata:
        name: scaledjob
        namespace: my-namespace
      spec:
        jobTargetRef:
          activeDeadlineSeconds: 600 (1)
          backoffLimit: 6 (2)
          parallelism: 1 (3)
          completions: 1 (4)
          template: (5)
            metadata:
              name: pi
            spec:
              containers:
              - name: pi
                image: perl
                command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
        maxReplicaCount: 100 (6)
        pollingInterval: 30 (7)
        successfulJobsHistoryLimit: 5 (8)
        failedJobsHistoryLimit: 5 (9)
        envSourceContainerName: (10)
        rolloutStrategy: gradual (11)
        scalingStrategy: (12)
          strategy: "custom"
          customScalingQueueLengthDeduction: 1
          customScalingRunningJobPercentage: "0.5"
          pendingPodConditions:
          - "Ready"
          - "PodScheduled"
          - "AnyOtherCustomPodCondition"
          multipleScalersCalculation: "max"
        triggers:
        - type: prometheus (13)
          metadata:
            serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
            namespace: kedatest
            metricName: http_requests_total
            threshold: '5'
            query: sum(rate(http_requests_total{job="test-app"}[1m]))
            authModes: "bearer"
        - authenticationRef: (14)
            name: prom-triggerauthentication
          metadata:
            name: prom-triggerauthentication
          type: object
        - authenticationRef: (15)
            name: prom-cluster-triggerauthentication
          metadata:
            name: prom-cluster-triggerauthentication
          type: object
      1Specifies the maximum duration the job can run.
      2Specifies the number of retries for a job. The default is 6.
      3Optional: Specifies how many pod replicas a job should run in parallel; defaults to 1.
      • For non-parallel jobs, leave unset. When unset, the default is 1.

      4Optional: Specifies how many successful pod completions are needed to mark a job completed.
      • For non-parallel jobs, leave unset. When unset, the default is 1.

      • For parallel jobs with a fixed completion count, specify the number of completions.

      • For parallel jobs with a work queue, leave unset. When unset the default is the value of the parallelism parameter.

      5Specifies the template for the pod the controller creates.
      6Optional: Specifies the maximum number of replicas when scaling up. The default is 100.
      7Optional: Specifies the interval in seconds to check each trigger on. The default is 30.
      8Optional: Specifies how many successfully finished jobs should be kept. The default is 100.
      9Optional: Specifies how many failed jobs should be kept. The default is 100.
      10Optional: Specifies the name of the container in the target resource, from which the custom autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0].
      11Optional: Specifies whether existing jobs are terminated whenever a scaled job is being updated:
      • default: The autoscaler terminates an existing job if its associated scaled job is updated. The autoscaler recreates the job with the latest specs.

      • gradual: The autoscaler does not terminate an existing job if its associated scaled job is updated. The autoscaler creates new jobs with the latest specs.

      12Optional: Specifies a scaling strategy: default, custom, or accurate. The default is default. For more information, see the link in the “Additional resources” section that follows.
      13Specifies the trigger to use as the basis for scaling, as described in the “Understanding the custom metrics autoscaler triggers” section.
      14Optional: Specifies a trigger authentication, as described in the “Creating a custom metrics autoscaler trigger authentication” section.
      15Optional: Specifies a cluster trigger authentication, as described in the “Creating a custom metrics autoscaler trigger authentication” section.
    2. Create the custom metrics autoscaler:

      $ oc create -f <file-name>.yaml

    Verification

    • View the command output to verify that the custom metrics autoscaler was created:

      $ oc get scaledjob <scaled_job_name>

      Example output

      NAME        MAX   TRIGGERS     AUTHENTICATION               READY   ACTIVE   AGE
      scaledjob   100   prometheus   prom-triggerauthentication   True    True     8s

      Note the following fields in the output:

    • TRIGGERS: Indicates the trigger, or scaler, that is being used.

    • AUTHENTICATION: Indicates the name of any trigger authentication being used.

    • READY: Indicates whether the scaled object is ready to start scaling:

      • If True, the scaled object is ready.

      • If False, the scaled object is not ready because of a problem in one or more of the objects you created.

    • ACTIVE: Indicates whether scaling is taking place:

      • If True, scaling is taking place.

      • If False, scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created.

    Additional resources

    You can remove the custom metrics autoscaler from your OKD cluster. After removing the Custom Metrics Autoscaler Operator, remove other components associated with the Operator to avoid potential issues.

    You should delete the KedaController custom resource (CR) first. If you do not specifically delete the CR, OKD can hang when you delete the openshift-keda project. If you delete the Custom Metrics Autoscaler Operator before deleting the CR, you are not able to delete the CR.
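
    For example, you can delete the KedaController CR from the OpenShift CLI, assuming it is named keda as in the installation example:

    $ oc delete kedacontroller keda -n openshift-keda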

    Prerequisites

    • The Custom Metrics Autoscaler Operator must be installed.

    Procedure

    1. In the OKD web console, click Operators → Installed Operators.

    2. Switch to the openshift-keda project.

    3. Remove the KedaController custom resource.

      1. Find the Custom Metrics Autoscaler Operator and click the KedaController tab.

      2. Find the custom resource, and then click Delete KedaController.

      3. Click Uninstall.

    4. Remove the Custom Metrics Autoscaler Operator:

      1. Click Operators → Installed Operators.

      2. Find the Custom Metrics Autoscaler Operator, click the Options menu, and select Uninstall Operator.

      3. Click Uninstall.

    5. Optional: Use the OpenShift CLI to remove the custom metrics autoscaler components:

      1. Delete the custom metrics autoscaler CRDs:

        • clustertriggerauthentications.keda.sh

        • kedacontrollers.keda.sh

        • scaledjobs.keda.sh

        • scaledobjects.keda.sh

        • triggerauthentications.keda.sh

        $ oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh

        Deleting the CRDs removes the associated roles, cluster roles, and role bindings. However, there might be a few cluster roles that must be manually deleted.

      2. List any custom metrics autoscaler cluster roles:

        $ oc get clusterrole | grep keda.sh
      3. Delete the listed custom metrics autoscaler cluster roles. For example:

        $ oc delete clusterrole.keda.sh-v1alpha1-admin
      4. List any custom metrics autoscaler cluster role bindings:

        $ oc get clusterrolebinding | grep keda.sh
      5. Delete the listed custom metrics autoscaler cluster role bindings. For example:

        $ oc delete clusterrolebinding.keda.sh-v1alpha1-admin
    6. Delete the custom metrics autoscaler project:

      $ oc delete project openshift-keda