Traffic Trace

    Tracing is supported over HTTP, HTTP2, and gRPC protocols in a Mesh. You must explicitly enable tracing for each service and data plane proxy you want to trace.

    You must also:

    1. Add a tracing backend. You specify a tracing backend as a Mesh resource property.
    2. Add a TrafficTrace resource. You pass the backend to the TrafficTrace resource.

    Kuma currently supports the following backends:

    • zipkin
      • Jaeger as the Zipkin collector. The Zipkin examples specify Jaeger, but you can modify them for a Zipkin-only deployment.
    • datadog

    While most commonly we want all the traces to be sent to the same tracing backend, we can optionally create multiple tracing backends in a Mesh resource and store traces for different paths of our service traffic in different backends by leveraging Kuma tags. This is especially useful when we want traces to never leave a world region, or a cloud, for example.
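
    For example, a Mesh could define one backend per region and let TrafficTrace resources pick between them. A minimal sketch; the backend names and collector URLs here are hypothetical:

    type: Mesh
    name: default
    tracing:
      defaultBackend: jaeger-us          # used when a TrafficTrace does not name a backend
      backends:
        - name: jaeger-us                # hypothetical per-region collectors
          type: zipkin
          sampling: 100.0
          conf:
            url: http://jaeger-collector.us.example.internal:9411/api/v2/spans
        - name: jaeger-eu
          type: zipkin
          sampling: 100.0
          conf:
            url: http://jaeger-collector.eu.example.internal:9411/api/v2/spans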

    On Kubernetes you can deploy Jaeger automatically in a kuma-tracing namespace with kumactl install tracing | kubectl apply -f -.
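
    For example (the kubectl get check is just one way to confirm the pods came up; output will vary by version):

    # Install the bundled Jaeger tracing stack into the kuma-tracing namespace
    kumactl install tracing | kubectl apply -f -

    # Optionally confirm the Jaeger pods are running before wiring up the backend
    kubectl get pods -n kuma-tracing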

    Then add the Jaeger backend to the Mesh and apply the configuration with kubectl apply -f [..] (Kubernetes) or kumactl apply -f [..] (Universal). The example below uses the Universal format:

    type: Mesh
    name: default
    tracing:
      defaultBackend: jaeger-collector
      backends:
        - name: jaeger-collector
          type: zipkin
          sampling: 100.0
          conf:
            url: http://jaeger-collector.kuma-tracing:9411/api/v2/spans
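
    On Kubernetes, the same backend is defined on the Mesh custom resource; a sketch mirroring the Datadog example below, applied with kubectl apply -f [..]:

    apiVersion: kuma.io/v1alpha1
    kind: Mesh
    metadata:
      name: default
    spec:
      tracing:
        defaultBackend: jaeger-collector
        backends:
          - name: jaeger-collector
            type: zipkin
            sampling: 100.0
            conf:
              url: http://jaeger-collector.kuma-tracing:9411/api/v2/spans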

    To use the Datadog backend instead, first:

    1. Set up the Datadog agent.
    2. Set up APM.

    If Datadog is running within Kubernetes, you can expose the APM agent port to Kuma via a Kubernetes Service.
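
    A sketch of such a Service, assuming the Datadog agent pods live in a datadog namespace and carry an app: datadog label (adjust the namespace and selector to your deployment):

    apiVersion: v1
    kind: Service
    metadata:
      name: trace-svc        # referenced by the Mesh backend below
      namespace: datadog
    spec:
      selector:
        app: datadog         # assumption: label used by your Datadog agent pods
      ports:
        - name: apm
          protocol: TCP
          port: 8126         # default Datadog APM/trace agent port
          targetPort: 8126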

    Apply the configuration with kubectl apply -f [..].

    Then set up the Datadog backend in Kuma. On Kubernetes, add it to the Mesh resource:

    apiVersion: kuma.io/v1alpha1
    kind: Mesh
    metadata:
      name: default
    spec:
      tracing:
        defaultBackend: datadog-collector
        backends:
          - name: datadog-collector
            type: datadog
            sampling: 100.0
            conf:
              address: trace-svc.datadog.svc.cluster.local
              port: 8126

    where trace-svc is the name of the Kubernetes Service you specified when you configured the Datadog APM agent.

    Apply the configuration with kubectl apply -f [..].

    On Universal, apply the equivalent Mesh configuration with kumactl apply -f [..] or with the HTTP API.
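
    A sketch of that Universal-format equivalent; the address below is an assumption, so point it at wherever your Datadog APM agent actually listens:

    type: Mesh
    name: default
    tracing:
      defaultBackend: datadog-collector
      backends:
        - name: datadog-collector
          type: datadog
          sampling: 100.0
          conf:
            address: 127.0.0.1   # assumption: Datadog APM agent reachable on the local host
            port: 8126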

    The defaultBackend property specifies the tracing backend to use if it’s not explicitly specified in the TrafficTrace resource.

    Next, add a TrafficTrace resource that selects the data plane proxies to trace and names the backend to store traces in. On Kubernetes:

    apiVersion: kuma.io/v1alpha1
    kind: TrafficTrace
    mesh: default
    metadata:
      name: trace-all-traffic
    spec:
      selectors:
        - match:
            kuma.io/service: '*'
      conf:
        backend: jaeger-collector # or the name of any backend defined for the mesh

    Apply the configuration with kubectl apply -f [..].

    On Universal, apply the equivalent TrafficTrace with kumactl apply -f [..] or with the HTTP API.
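
    In the Universal format that resource looks roughly like this, following the same pattern as the Universal Mesh examples above:

    type: TrafficTrace
    name: trace-all-traffic
    mesh: default
    selectors:
      - match:
          kuma.io/service: '*'
    conf:
      backend: jaeger-collector # or the name of any backend defined for the mesh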

    You can also add tags to apply the TrafficTrace resource to only a subset of data plane proxies. TrafficTrace is a Dataplane policy, so you can specify any of the Dataplane tags in its selectors.
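
    For example, combined with the multi-backend setup sketched earlier, a TrafficTrace could keep traces for one region's proxies in that region's collector. The tag value and backend name here are hypothetical:

    apiVersion: kuma.io/v1alpha1
    kind: TrafficTrace
    mesh: default
    metadata:
      name: trace-eu-traffic
    spec:
      selectors:
        - match:
            kuma.io/service: '*'
            kuma.io/zone: eu-1        # hypothetical tag value; any Dataplane tag works here
      conf:
        backend: jaeger-eu            # hypothetical per-region backend from the earlier sketch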

    Services should also be instrumented to preserve the trace chain across requests made between different services. You can instrument with a language library of your choice, or you can manually pass the following headers:

    • x-request-id
    • x-b3-traceid
    • x-b3-parentspanid
    • x-b3-spanid
    • x-b3-flags

    To visualise your traces, you need Grafana up and running. You can install Grafana by following its official documentation, or use the instance installed with Traffic metrics.
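
    If you go with the bundled option, the metrics stack (which includes Grafana) can be installed the same way as the tracing stack:

    # Install the bundled metrics stack (includes Grafana)
    kumactl install metrics | kubectl apply -f -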

    With Grafana installed, you can configure a new datasource that points at the Jaeger query endpoint, so Grafana will be able to retrieve the traces from Jaeger.
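
    As a sketch, a Grafana provisioning file for that datasource could look like this; the URL assumes the bundled Jaeger install exposes its query API as a jaeger-query service in the kuma-tracing namespace, so adjust it to your deployment:

    apiVersion: 1
    datasources:
      - name: Jaeger
        type: jaeger                        # Grafana's built-in Jaeger datasource type
        access: proxy
        url: http://jaeger-query.kuma-tracing   # assumption: adjust host/port to your Jaeger query service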