Autoscaling a Dapr app with KEDA
Dapr, with its modular building-block approach and its 10+ different pub/sub components, makes it easy to write message-processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge), the autoscaling of Dapr applications is managed by the hosting layer.
For Kubernetes, Dapr integrates with KEDA, an event-driven autoscaler for Kubernetes. Many of Dapr’s pub/sub components overlap with the scalers provided by KEDA, so it’s easy to configure your Dapr deployment on Kubernetes to autoscale based on back pressure using KEDA.
This how-to walks through configuring a scalable Dapr application together with a KEDA scaler that reacts to back pressure on a Kafka topic; however, you can apply this approach to any pub/sub component offered by Dapr.
To install KEDA, follow the Deploying KEDA instructions on the KEDA website.
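If you prefer Helm, one common path is the official KEDA chart; the release name and namespace below are conventional choices rather than requirements, so treat this as a sketch:

```bash
# One way to install KEDA with Helm (release name and namespace are conventional choices)
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
```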
If you don’t have access to a Kafka service, you can install it into your Kubernetes cluster for this example by using Helm:
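The deployment and StatefulSet names in the status commands below suggest the Confluent Platform Helm chart installed under the release name `kafka` in a `kafka` namespace; a minimal sketch under that assumption:

```bash
# Assumes the Confluent Platform Helm chart, release name "kafka", namespace "kafka"
helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
helm repo update
kubectl create ns kafka
helm install kafka confluentinc/cp-helm-charts -n kafka
```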
To check on the status of the Kafka deployment:
```bash
kubectl rollout status deployment.apps/kafka-cp-ksql-server -n kafka
kubectl rollout status statefulset.apps/kafka-cp-kafka -n kafka
```
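The topic-creation command below is executed from a pod named `kafka-client`, which the chart does not create for you. One way to spin up a throwaway client pod is sketched here; the image and tag are assumptions, and any image containing the Kafka CLI tools will do:

```bash
# Hypothetical helper pod for running Kafka CLI tools inside the cluster
kubectl -n kafka run kafka-client --restart='Never' \
    --image=confluentinc/cp-kafka:6.1.0 -- sleep infinity
kubectl wait -n kafka --for=condition=ready pod kafka-client --timeout=120s
```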
Next, create the topic used in this example (for example `demo-topic`):
```bash
kubectl -n kafka exec -it kafka-client -- kafka-topics \
    --zookeeper kafka-cp-zookeeper-headless:2181 \
    --topic demo-topic \
    --create \
    --partitions 10 \
    --replication-factor 3 \
    --if-not-exists
```
Next, we’ll deploy the Dapr Kafka pub/sub component for Kubernetes. Paste the following YAML into a file named `kafka-pubsub.yaml`:
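A sketch of that component, assuming the broker address from the Helm install above and a placeholder component name; the `consumerID` value is discussed right after the file:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: autoscaling-pubsub        # hypothetical component name
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: kafka-cp-kafka.kafka.svc.cluster.local:9092   # assumes the Helm install above
    - name: authRequired
      value: "false"
    - name: consumerID
      value: autoscaling-subscriber
```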
The above YAML defines the pub/sub component that your application subscribes to, using the `demo-topic` topic we created above. If you used the Kafka Helm install instructions above, you can leave the `brokers` value as is. Otherwise, change it to the connection string of your Kafka brokers.

Also notice the `autoscaling-subscriber` value set for `consumerID`, which is used later to make sure that KEDA and your deployment use the same Kafka consumer group.
Now, deploy the component to the cluster:
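Assuming the file name above, deploying the component is a single `kubectl apply`:

```bash
kubectl apply -f kafka-pubsub.yaml
```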
Next, paste the following into a file named `kafka_scaler.yaml`, and fill in the name of your Dapr deployment in the required place:
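A sketch of the KEDA `ScaledObject`, assuming KEDA v2 and the topic, brokers, and consumer ID used above; the `scaleTargetRef` name is the placeholder you must replace, and `lagThreshold` is an illustrative value:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: subscriber-scaler                        # hypothetical name
spec:
  scaleTargetRef:
    name: <REPLACE-WITH-DAPR-DEPLOYMENT-NAME>    # the Dapr ID of your app (dapr.io/id annotation)
  pollingInterval: 15
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
  - type: kafka
    metadata:
      topic: demo-topic
      bootstrapServers: kafka-cp-kafka.kafka.svc.cluster.local:9092
      consumerGroup: autoscaling-subscriber
      lagThreshold: "5"
```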
A few things to review here in the above file:
- `name` in the `scaleTargetRef` section in the `spec:` is the Dapr ID of your app defined in the Deployment (the value of the `dapr.io/id` annotation)
- `pollingInterval` is the frequency in seconds with which KEDA checks Kafka for the current topic partition offset
- `minReplicaCount` is the minimum number of replicas KEDA creates for your deployment. (Note: if your application takes a long time to start, it may be better to set that to `1` to ensure at least one replica of your deployment is always running. Otherwise, set that to `0` and KEDA creates the first replica for you)
- `maxReplicaCount` is the maximum number of replicas for your deployment. Given how Kafka partition offsets work, you shouldn’t set that value higher than the total number of topic partitions
- `topic` in the Kafka `metadata` section should be set to the same topic to which your Dapr deployment subscribes (in this example `demo-topic`)
- Similarly, the `bootstrapServers` should be set to the same broker connection string used in the `kafka-pubsub.yaml` file
- The `consumerGroup` should be set to the same value as the `consumerID` in the `kafka-pubsub.yaml` file
Next, deploy the KEDA scaler to Kubernetes:
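As before, this is a single `kubectl apply` of the file created above:

```bash
kubectl apply -f kafka_scaler.yaml
```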
All done!
Now that the `ScaledObject` KEDA object is configured, your deployment will scale based on the lag of the Kafka topic. More information on configuring KEDA for Kafka topics is available in the KEDA documentation.
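Optionally, to watch the scaling happen, you can observe the HPA that KEDA creates for the `ScaledObject` and the pods of your deployment; the exact object names and namespace depend on where you deployed your app:

```bash
# Watch the HPA that KEDA manages and the pods of your deployment
kubectl get hpa --watch
kubectl get pods --watch
```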