Jaeger Native Tracing
- Trace propagation works with other services that use Jaeger, without any configuration changes.
- A variety of sampling strategies can be used, including probabilistic sampling and remote sampling, where the sampling policy is centrally controlled from Jaeger's backend.
- Spans are sent to the collector in a more efficient binary encoding.
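Concretely, the native Jaeger tracer is typically wired in through Envoy's dynamic OpenTracing driver, which loads the Jaeger C++ client library as a plugin. The sketch below shows what the relevant tracing stanza of the front-envoy configuration could look like; the plugin path, field names, and agent address are assumptions and may differ across Envoy versions.

```yaml
tracing:
  http:
    name: envoy.tracers.dynamic_ot
    typed_config:
      "@type": type.googleapis.com/envoy.config.trace.v2.DynamicOtConfig
      # Path to the Jaeger client plugin inside the container (assumed).
      library: /usr/local/lib/libjaegertracing_plugin.so
      config:
        service_name: front-proxy
        sampler:
          type: const      # sample every request, convenient for a sandbox
          param: 1
        reporter:
          # UDP endpoint of the Jaeger agent (hostname assumed).
          localAgentHostPort: jaeger:6831
```

With a `const` sampler set to 1, every request is traced; in production a `probabilistic` or `remote` sampler would normally be used instead.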
This sandbox is very similar to the front proxy architecture described above, with one difference: service1 makes an API call to service2 before returning a response. The three containers will be deployed inside a virtual network called . (Note: the sandbox only works on x86-64).
Before routing a request to the appropriate service or the application, Envoy will take care of generating the appropriate spans for tracing (parent/child context spans). At a high level, each span records the latency of upstream API calls as well as the information needed to correlate it with other related spans (e.g., the trace ID).
One of the most important benefits of tracing from Envoy is that Envoy takes care of propagating the traces to the Jaeger service cluster. However, to take full advantage of tracing, the application must propagate the trace headers that Envoy generates when it makes calls to other services. In the sandbox we have provided, the simple Flask app (see the trace function in /examples/front-proxy/service.py) acting as service1 propagates the trace headers while making an outbound call to service2.
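The propagation itself amounts to copying Envoy's trace headers from the incoming request onto any outbound request. The following is an illustrative sketch in the spirit of the sandbox's service.py, not its exact code; the header list is the standard Envoy/B3 set, and the function name is made up for this example.

```python
# Trace headers that Envoy generates and that the application must copy
# from incoming requests onto outbound requests (standard Envoy/B3 set).
TRACE_HEADERS_TO_PROPAGATE = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
]


def extract_trace_headers(incoming_headers):
    """Return only the Envoy trace headers present on an incoming request,
    ready to be attached to an outbound call to another service."""
    return {
        name: incoming_headers[name]
        for name in TRACE_HEADERS_TO_PROPAGATE
        if name in incoming_headers
    }
```

In a Flask handler, service1 would call something like `extract_trace_headers(request.headers)` and pass the result as the `headers` argument of its outbound HTTP request to service2, so that both hops end up in the same trace.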
The following documentation runs through the setup of an Envoy cluster organized as described in the image above.
Step 1: Build the sandbox
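The sandbox ships as a Docker Compose setup. Assuming the standard Envoy sandbox layout, building and starting the containers looks roughly like the following; the directory name and service names are assumptions.

```shell
# From the Jaeger native tracing example directory (path is an assumption):
# cd examples/jaeger-native-tracing
docker-compose pull           # fetch the base images
docker-compose up --build -d  # build and start the containers in the background
docker-compose ps             # verify that front-envoy, service1, and service2 are up
```

Once `docker-compose ps` shows all containers running, you can move on to generating traffic.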
Step 2: Generate some load
You can now send a request to service1 via the front-envoy as follows:
$ curl -v localhost:8000/trace/1
* Trying 192.168.99.100...
> Host: 192.168.99.100:8000
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< x-envoy-upstream-service-time: 9
< server: envoy
< date: Fri, 26 Aug 2018 19:39:19 GMT
<
Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
Step 3: View the traces in Jaeger UI
Point your browser to . You should see the Jaeger dashboard. Set the service to “front-proxy” and hit ‘Find Traces’. You should see traces from the front-proxy. Click on a trace to explore the path taken by the request from front-proxy to service1 to service2, as well as the latency incurred at each hop.