MeshTrace (beta)

    This policy enables publishing traces to a third-party tracing solution.

    Tracing is supported over HTTP, HTTP2, and gRPC protocols. You must explicitly specify the protocol for each service and data plane proxy you want to enable tracing for.

    Kuma currently supports the following trace exposition formats:

    • zipkin — traces in this format can also be sent to Jaeger
    • datadog

    Services still need to be instrumented to preserve the trace chain across requests between different services.

    You can instrument with a language library of your choice (for both zipkin and datadog). For HTTP you can also manually forward the following headers:

    • x-request-id
    • x-b3-traceid
    • x-b3-parentspanid
    • x-b3-spanid
    • x-b3-sampled
    • x-b3-flags


    You can configure sampling settings equivalent to Envoy’s. Most of the time setting only overall is sufficient; random and client are for advanced use cases. Each value is a percentage between 0 and 100.

    Example:
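    A sampling section using all three settings might look like the following sketch; the percentages are illustrative, and the comments paraphrase Envoy's sampling semantics:

```yaml
sampling:
  overall: 80 # overall ceiling: at most 80% of requests produce traces
  random: 60 # randomly sample 60% of requests
  client: 40 # sample 40% of requests that carry a client trace id
```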

    Tags

    You can add tags to trace metadata by directly supplying the value (literal) or by taking it from a header (header).

    Example:

    tags:
      - name: team
        literal: core
      - name: env
        header:
          name: x-env
          default: prod
      - name: version
        header:
          name: x-version

    Datadog

    You can configure a Datadog backend with a url and splitService.

    Example:

    datadog:
      url: http://my-agent:8080 # Required. The url to reach a running datadog agent
      splitService: true # Default to false. If true, inbound and outbound requests are split into different services in Datadog

    The splitService property determines if Datadog service names should be split based on traffic direction and destination. For example, with splitService: true and a backend service that communicates with a couple of databases, you would get service names like backend_INBOUND, backend_OUTBOUND_db1, and backend_OUTBOUND_db2 in Datadog.

    Zipkin

    In most cases the only field you’ll want to set is url.

    Example:

    zipkin:
      url: http://jaeger-collector:9411/api/v2/spans # Required. The url of a zipkin collector to send traces to
      traceId128bit: false # Default to false, which exposes a 64-bit traceId. If true, the trace id is 128-bit
      apiVersion: httpJson # Default to httpJson. The version of the zipkin API; can be httpJson or httpProto
      sharedSpanContext: false # Default to true. If true, inbound and outbound traffic share the same span


    Simple example:

    apiVersion: kuma.io/v1alpha1
    kind: MeshTrace
    metadata:
      name: default
      namespace: kuma-system
      labels:
        kuma.io/mesh: default # optional, defaults to `default` if unset
    spec:
      targetRef:
        kind: Mesh
      default:
        backends:
          - zipkin:
              url: http://jaeger-collector.mesh-observability:9411/api/v2/spans

    Full example:
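    A full Kubernetes example can be assembled by reusing the tags and sampling settings shown elsewhere on this page; the tag names and percentages are illustrative:

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTrace
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default # optional, defaults to `default` if unset
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - zipkin:
          url: http://jaeger-collector.mesh-observability:9411/api/v2/spans
          apiVersion: httpJson
    tags:
      - name: team
        literal: core
      - name: env
        header:
          name: x-env
          default: prod
    sampling:
      overall: 80
      random: 60
      client: 40
```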

    Apply the configuration with kubectl apply -f [..].

    Simple example:

    type: MeshTrace
    name: default
    mesh: default
    spec:
      targetRef:
        kind: Mesh
      default:
        backends:
          - zipkin:
              url: http://jaeger-collector:9411/api/v2/spans

    Full example:

    type: MeshTrace
    name: default
    mesh: default
    spec:
      targetRef:
        kind: Mesh
      default:
        backends:
          - zipkin:
              url: http://jaeger-collector:9411/api/v2/spans
              apiVersion: httpJson
        tags:
          - name: team
            literal: core
          - name: env
            header:
              name: x-env
              default: prod
          - name: version
            header:
              name: x-version
        sampling:
          overall: 80
          random: 60
          client: 40

    Apply the configuration with kumactl apply -f [..] or with the HTTP API.

    This assumes a Datadog agent is configured and running. If you haven’t already set one up, see the Datadog agent documentation.

    Simple example:

    apiVersion: kuma.io/v1alpha1
    kind: MeshTrace
    metadata:
      name: default
      namespace: kuma-system
      labels:
        kuma.io/mesh: default # optional, defaults to `default` if unset
    spec:
      targetRef:
        kind: Mesh
      default:
        backends:
          - datadog:
              url: http://trace-svc.default.svc.cluster.local:8126

    Full example:

    apiVersion: kuma.io/v1alpha1
    kind: MeshTrace
    metadata:
      name: default
      namespace: kuma-system
      labels:
        kuma.io/mesh: default # optional, defaults to `default` if unset
    spec:
      targetRef:
        kind: Mesh
      default:
        backends:
          - datadog:
              url: http://trace-svc.default.svc.cluster.local:8126
              splitService: true
        tags:
          - name: team
            literal: core
          - name: env
            header:
              name: x-env
              default: prod
          - name: version
            header:
              name: x-version
        sampling:
          random: 60
          client: 40

    where trace-svc is the name of the Kubernetes Service you specified when you configured the Datadog APM agent.

    Apply the configuration with kubectl apply -f [..].

    Simple example:
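    A minimal Universal sketch, following the same shape as the Zipkin simple example; the agent address is illustrative:

```yaml
type: MeshTrace
name: default
mesh: default
spec:
  targetRef:
    kind: Mesh
  default:
    backends:
      - datadog:
          url: http://127.0.0.1:8126
```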

    Full example:

    type: MeshTrace
    name: default
    mesh: default
    spec:
      targetRef:
        kind: Mesh
      default:
        backends:
          - datadog:
              url: http://127.0.0.1:8126
              splitService: true
        tags:
          - name: team
            literal: core
          - name: env
            header:
              name: x-env
              default: prod
          - name: version
            header:
              name: x-version
        sampling:
          overall: 80
          random: 60
          client: 40

    Apply the configuration with kumactl apply -f [..] or with the HTTP API.

    Targeting parts of the infrastructure

    While usually you want all traces sent to the same tracing backend, you can target parts of a Mesh with a finer-grained targetRef and a designated backend to trace different paths of your service traffic. This is especially useful when you want traces to never leave a region or a cloud provider, for example.

    In this example, we have two zones east and west, each of these with their own Zipkin collector: east.zipkincollector:9411/api/v2/spans and west.zipkincollector:9411/api/v2/spans. We want dataplane proxies in each zone to only send traces to their local collector.

    To do this, we use a targetRef kind value of MeshSubset to filter which dataplane proxies a policy applies to.

    West only policy:

    type: MeshTrace
    name: trace-west
    mesh: default
    spec:
      targetRef:
        kind: MeshSubset
        tags:
          kuma.io/zone: west
      default:
        backends:
          - zipkin:
              url: http://west.zipkincollector:9411/api/v2/spans

    East only policy:

    type: MeshTrace
    name: trace-east
    mesh: default
    spec:
      targetRef:
        kind: MeshSubset
        tags:
          kuma.io/zone: east
      default:
        backends:
          - zipkin:
              url: http://east.zipkincollector:9411/api/v2/spans

    West only policy:

    apiVersion: kuma.io/v1alpha1
    kind: MeshTrace
    metadata:
      name: trace-west
      namespace: kuma-system
    spec:
      targetRef:
        kind: MeshSubset
        tags:
          kuma.io/zone: west
      default:
        backends:
          - zipkin:
              url: http://west.zipkincollector:9411/api/v2/spans