Zipkin Tracing

    All incoming requests are routed via the front Envoy, which acts as a reverse proxy sitting on the edge of the envoymesh network. The front Envoy's port 80 is mapped to host port 8000 by docker compose (see /examples/zipkin-tracing/docker-compose.yaml). Notice that all Envoys are configured to collect request traces (the tracing setup on the http_connection_manager filter) and to propagate the spans generated by the Zipkin tracer to a Zipkin cluster (the trace driver setup in /examples/zipkin-tracing/front-envoy-zipkin.yaml).
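
    As a rough sketch of how those two pieces fit together (field names follow Envoy's v3 xDS API; the cluster name, addresses, and ports below are illustrative and may not match the files in the repository exactly):

    # Trimmed illustration of an edge Envoy with Zipkin tracing enabled.
    # The actual files under /examples/zipkin-tracing/ may differ in detail.
    static_resources:
      listeners:
      - address:
          socket_address: { address: 0.0.0.0, port_value: 80 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_http
              generate_request_id: true   # gives every request an ID that spans can be correlated by
              tracing:                    # enables span collection for requests on this listener
                provider:
                  name: envoy.tracers.zipkin
                  typed_config:
                    "@type": type.googleapis.com/envoy.config.trace.v3.ZipkinConfig
                    collector_cluster: zipkin          # must match a cluster defined below
                    collector_endpoint: "/api/v2/spans"
                    collector_endpoint_version: HTTP_JSON
              # route_config omitted for brevity; it routes /trace/* to the service Envoys
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
      - name: zipkin
        type: STRICT_DNS
        load_assignment:
          cluster_name: zipkin
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: zipkin, port_value: 9411 }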

    Before routing a request to the appropriate service Envoy or the application, Envoy takes care of generating the appropriate spans for tracing (parent/child/shared context spans). At a high level, each span records the latency of upstream API calls as well as the information needed to correlate it with other related spans (e.g., the trace ID).
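
    For reference, the Zipkin tracer propagates that correlation context between hops using the B3 HTTP headers, in addition to Envoy's own request ID header. The list below is a summary, not output captured from this example:

        x-request-id          # Envoy-generated request ID (requires generate_request_id)
        x-b3-traceid          # trace ID shared by every span in the trace
        x-b3-spanid           # ID of the current span
        x-b3-parentspanid     # ID of the parent span, if any
        x-b3-sampled          # whether this trace should be reported to Zipkin
        x-b3-flags            # debug flag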

    The following documentation runs through the setup of an Envoy cluster organized as described in the image above.

    Step 1: Build the sandbox
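
    One plausible build-and-start sequence, assuming you run it from the example directory referenced above (the exact directory layout may differ in your checkout):

    $ cd examples/zipkin-tracing
    $ docker compose pull
    $ docker compose up --build -d
    $ docker compose ps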

    Step 2: Generate some load

    You can now send a request to service1 via the front-envoy as follows:

    $ curl -v localhost:8000/trace/1
    * Trying 192.168.99.100...
    * Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
    > User-Agent: curl/7.43.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < x-envoy-upstream-service-time: 1
    < server: envoy
    < date: Fri, 26 Aug 2018 19:39:19 GMT
    <
    Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
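
    To give the Zipkin dashboard more than one trace to show, you can repeat the same request in a loop; this is just one way to generate load:

    $ for i in $(seq 1 50); do curl -s localhost:8000/trace/1 > /dev/null; done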

    Point your browser to the Zipkin web UI (Zipkin listens on port 9411 by default). You should see the Zipkin dashboard. Set the service to “front-proxy”, set the start time to a few minutes before the start of the test (step 2), and hit Enter. You should see traces from the front-proxy. Click on a trace to explore the path taken by the request from front-proxy to service1 to service2, as well as the latency incurred at each hop.
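
    If you prefer the command line, the same traces can be listed through Zipkin's v2 HTTP API; the snippet below assumes the Zipkin container publishes its default port 9411 on localhost:

    $ curl -s "http://localhost:9411/api/v2/services"
    $ curl -s "http://localhost:9411/api/v2/traces?serviceName=front-proxy&limit=10"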