Apache Kafka Broker

    Notable features are:


    Installation

    1. Install the Kafka controller by entering the following command:
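
      The exact command was not included above. An illustrative form, assuming the eventing-kafka-broker release artifacts published under the knative-extensions GitHub organization (substitute the Knative Eventing version you are installing), is:

        # Hypothetical release URL; replace <version> with your Knative Eventing version.
        kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v<version>/eventing-kafka-controller.yaml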

    2. Install the Kafka Broker data plane by entering the following command:
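
      As above, the command itself is not shown here. An illustrative form, assuming the same release artifacts and an artifact named eventing-kafka-broker.yaml, is:

        # Hypothetical release URL; replace <version> with your Knative Eventing version.
        kubectl apply -f https://github.com/knative-extensions/eventing-kafka-broker/releases/download/knative-v<version>/eventing-kafka-broker.yaml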

    3. Verify that kafka-controller, kafka-broker-receiver and kafka-broker-dispatcher are running by entering the following command:

        kubectl get deployments.apps -n knative-eventing
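
      The three deployments should be listed as available. The following output is illustrative only; names of other Knative Eventing deployments, counts, and ages will differ in your cluster:

        NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
        kafka-broker-dispatcher   1/1     1            1           10s
        kafka-broker-receiver     1/1     1            1           10s
        kafka-controller          1/1     1            1           10s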

    A Kafka Broker object looks like this:

        apiVersion: eventing.knative.dev/v1
        kind: Broker
        metadata:
          annotations:
            # case-sensitive
            eventing.knative.dev/broker.class: Kafka
          name: default
          namespace: default
        spec:
          # Configuration specific to this broker.
          config:
            apiVersion: v1
            kind: ConfigMap
            name: kafka-broker-config
            namespace: knative-eventing
          # Optional dead letter sink, you can specify either:
          #  - deadLetterSink.ref, which is a reference to a Callable
          #  - deadLetterSink.uri, which is an absolute URI to a Callable (It can potentially be out of the Kubernetes cluster)
          delivery:
            deadLetterSink:
              ref:
                apiVersion: serving.knative.dev/v1
                kind: Service
                name: dlq-service

    spec.config should reference a ConfigMap that looks like the following:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: kafka-broker-config
          namespace: knative-eventing
        data:
          # Number of topic partitions.
          default.topic.partitions: "10"
          # Replication factor of topic messages.
          default.topic.replication.factor: "1"
          # A comma-separated list of bootstrap servers. (It can be in or out of the k8s cluster.)
          bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"

    The above ConfigMap is installed in the cluster. You can edit the configuration or create a new one with the same values depending on your needs.
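
    For example, to edit the installed ConfigMap in place (a minimal sketch; adjust the name and namespace if you created your own copy):

        kubectl edit configmap kafka-broker-config -n knative-eventing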

    Set as default broker implementation

    To set the Kafka broker as the default implementation for all brokers in the Knative deployment, you can apply global settings by modifying the config-br-defaults ConfigMap in the knative-eventing namespace.

    This allows you to avoid configuring individual or per-namespace settings for each broker, such as metadata.annotations.eventing.knative.dev/broker.class or spec.config.
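
    As an example, a config-br-defaults ConfigMap that makes the Kafka broker class and the kafka-broker-config ConfigMap the cluster-wide default might look like the following. This is a sketch assuming the clusterDefault layout of the default-br-config key; verify it against the config-br-defaults ConfigMap shipped with your Knative Eventing release:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: config-br-defaults
          namespace: knative-eventing
        data:
          # Sketch: confirm the key layout against your installed config-br-defaults.
          default-br-config: |
            clusterDefault:
              brokerClass: Kafka
              apiVersion: v1
              kind: ConfigMap
              name: kafka-broker-config
              namespace: knative-eventing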

    Kafka Producer and Consumer configurations

    Knative exposes all available Kafka producer and consumer configurations that can be modified to suit your workloads.

    You can change these configurations by modifying the config-kafka-broker-data-plane ConfigMap in the knative-eventing namespace.
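
    For example, to inspect the currently configured producer and consumer properties before changing them (the ConfigMap name comes from the text above; the keys inside it vary by release):

        kubectl get configmap config-kafka-broker-data-plane -n knative-eventing -o yaml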

    Documentation for the settings available in this ConfigMap is available on the Apache Kafka website, in particular, the Producer configurations and Consumer configurations.

    Enable debug logging for data plane components

    The following YAML shows the default logging configuration for the data plane components, which is created during the installation step:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: kafka-config-logging
          namespace: knative-eventing
        data:
          config.xml: |
            <configuration>
              <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
                <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
              </appender>
              <root level="INFO">
                <appender-ref ref="jsonConsoleAppender"/>
              </root>
            </configuration>

    To change the logging level to DEBUG, you need to:

    1. Apply the following kafka-config-logging ConfigMap, or replace level="INFO" with level="DEBUG" in the existing ConfigMap:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: kafka-config-logging
          namespace: knative-eventing
        data:
          config.xml: |
            <configuration>
              <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
                <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
              </appender>
              <root level="DEBUG">
                <appender-ref ref="jsonConsoleAppender"/>
              </root>
            </configuration>
    2. Restart the kafka-broker-receiver and the kafka-broker-dispatcher, by entering the following commands:
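
      The original commands are not shown above; a minimal sketch using kubectl rollout restart (adjust if you prefer deleting the pods instead) is:

        # Restart both data plane deployments so they pick up the new logging configuration.
        kubectl rollout restart deployment -n knative-eventing kafka-broker-receiver
        kubectl rollout restart deployment -n knative-eventing kafka-broker-dispatcher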