Autoscaling a Dapr app with KEDA

For Kubernetes, Dapr integrates with KEDA, an event-driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by KEDA, so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on back pressure using KEDA.

This how-to walks through the configuration of a scalable Dapr application that scales based on the back pressure on a Kafka topic; however, you can apply this approach to any pub/sub component offered by Dapr.

To install KEDA, follow the instructions on the KEDA website.
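For example, KEDA can be installed with Helm; the chart name and keda namespace below follow KEDA's documented defaults, but check the KEDA site for the current instructions:

```bash
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
```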

If you don't have access to a Kafka service, you can install it into your Kubernetes cluster for this example by using Helm:
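A sketch using the Confluent Platform community Helm chart, which produces resource names matching the commands below; the release name kafka and the kafka namespace are assumptions:

```bash
helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
helm repo update
kubectl create namespace kafka
helm install kafka confluentinc/cp-helm-charts --namespace kafka
```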

To check on the status of the Kafka deployment:

```bash
kubectl rollout status deployment.apps/kafka-cp-ksql-server -n kafka
kubectl rollout status statefulset.apps/kafka-cp-kafka -n kafka
```

When done, also deploy the Kafka client and wait until it's ready.
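The client manifest isn't shown here; a minimal stand-alone client pod could look like the following, saved as, say, kafka-client.yaml (the confluentinc/cp-kafka image is an assumption, and the pod name kafka-client matches the command used below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kafka-client
  namespace: kafka
spec:
  containers:
    - name: kafka-client
      image: confluentinc/cp-kafka   # assumed image; any image with the Kafka CLI tools works
      command: ["sh", "-c", "exec tail -f /dev/null"]
```

```bash
kubectl apply -f kafka-client.yaml
kubectl wait -n kafka --for=condition=ready pod kafka-client --timeout=120s
```

Once the client pod is ready, create the topic used in this example (for example, demo-topic):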

```bash
kubectl -n kafka exec -it kafka-client -- kafka-topics \
    --zookeeper kafka-cp-zookeeper-headless:2181 \
    --topic demo-topic \
    --create \
    --partitions 10 \
    --replication-factor 3 \
    --if-not-exists
```

Next, we'll deploy the Dapr Kafka pub/sub component for Kubernetes. Paste the following YAML into a file named kafka-pubsub.yaml:
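A sketch of the component definition, assuming the component name autoscaling-pubsub and the broker address produced by the Helm install above; the consumerID value is the one referenced later in this how-to:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: autoscaling-pubsub   # assumed component name
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: kafka-cp-kafka.kafka.svc.cluster.local:9092   # matches the Helm install above
    - name: authRequired
      value: "false"
    - name: consumerID
      value: autoscaling-subscriber
```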

The above YAML defines the pub/sub component that your application subscribes to (using the demo-topic topic created above). If you used the Kafka Helm install instructions above, you can leave the brokers value as is. Otherwise, change this value to the connection string for your Kafka brokers.

Also notice the autoscaling-subscriber value set for consumerID, which is used later to make sure that KEDA and your deployment use the same Kafka partition offset.

Now, deploy the component to the cluster:
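For example:

```bash
kubectl apply -f kafka-pubsub.yaml
```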

Next, we will deploy the KEDA scaling object that monitors the lag on the specified Kafka topic and configures the Kubernetes Horizontal Pod Autoscaler (HPA) to scale your Dapr deployment in and out.
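A sketch of the ScaledObject, saved here as kafka_scaler.yaml; the file name, the my-subscriber app name, and the numeric values are assumptions used to illustrate the fields discussed below:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: subscriber-scaler
spec:
  scaleTargetRef:
    name: my-subscriber        # the Dapr ID of your app (hypothetical name)
  pollingInterval: 15          # seconds between Kafka offset checks
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        topic: demo-topic
        bootstrapServers: kafka-cp-kafka.kafka.svc.cluster.local:9092
        consumerGroup: autoscaling-subscriber
        lagThreshold: "5"
```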

A few things to review in the above file:

• name in the scaleTargetRef section of the spec is the Dapr ID of your app defined in the Deployment (the value of the dapr.io/id annotation)
• pollingInterval is the frequency, in seconds, with which KEDA checks Kafka for the current topic partition offset
• minReplicaCount is the minimum number of replicas KEDA creates for your deployment. (Note: if your application takes a long time to start, it may be better to set this to 1 to ensure at least one replica of your deployment is always running. Otherwise, set it to 0 and KEDA creates the first replica for you.)
• maxReplicaCount is the maximum number of replicas for your deployment. Given how Kafka partitions are assigned to consumers, you shouldn't set this value higher than the total number of topic partitions
• topic in the Kafka trigger metadata section should be set to the same topic to which your Dapr deployment subscribes (in this example, demo-topic)
• Similarly, bootstrapServers should be set to the same broker connection string used in the kafka-pubsub.yaml file
• consumerGroup should be set to the same value as the consumerID in the kafka-pubsub.yaml file

Next, deploy the KEDA scaler to Kubernetes:
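For example, assuming the ScaledObject above was saved as kafka_scaler.yaml:

```bash
kubectl apply -f kafka_scaler.yaml
```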

All done!

Now that the KEDA ScaledObject is configured, your deployment will scale based on the lag of the Kafka topic. More information on configuring KEDA for Kafka topics is available here.

You can now start publishing messages to your Kafka topic demo-topic and watch the pods autoscale when the lag exceeds the threshold defined in the KEDA scaler manifest. You can publish messages to the Kafka Dapr component by using the Dapr CLI, for example:
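With a recent Dapr CLI, a publish might look like the following; the publishing app ID dapr-app is a placeholder, and the pubsub name must match your component (here assumed to be autoscaling-pubsub):

```bash
dapr publish --publish-app-id dapr-app --pubsub autoscaling-pubsub --topic demo-topic --data '{"message": "hello"}'
```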