Distributed Tracing with Java “MicroDonuts”, Kubernetes and the Edge Stack API Gateway
Start with Why?
Distributed Tracing 101
Distributed Tracing with the Ambassador Edge Stack API Gateway
Exploring the MicroDonuts Application
Deploying MicroDonuts on Kubernetes with Ambassador
Deploying Zipkin and Ambassador in Kubernetes
Tracing the Donuts
What’s Next?
Distributed tracing is increasingly seen as an essential component for observing microservice-based applications. As a result, many of the modern microservice language frameworks are being provided with support for tracing implementations such as OpenZipkin, Jaeger, OpenCensus, and LightStep xPM. Google was one of the first organisations to talk about its use of distributed tracing in a paper that described its Dapper implementation, and one of the requirements the paper identified was, essentially, the need for ubiquitous deployment of the tracing system:
Ubiquity is important since the usefulness of a tracing infrastructure can be severely impacted if even small parts of the system are not being monitored
As I’ve written about previously, many engineers beginning a greenfield project or exploring a migration to a microservice architecture often start by deploying a front proxy or API gateway at the edge to dynamically route traffic to independent services. As every inbound request flows through this component, an edge gateway will naturally need to support distributed tracing, ideally using a well-established open protocol.
This article explores how you can add the distributed tracing support provided by the open source Edge Stack API Gateway to the existing OpenTracing Java “MicroDonuts” demonstration application running in Kubernetes.
Start with Why?
As discussed by Cindy Sridharan in her article “Monitoring in the Time of Cloud Native,” not only is distributed tracing considered one of the three pillars of modern observability (alongside metric monitoring and logging), but it also provides developers with richer options for debugging distributed systems:
Tracing captures the lifetime of requests as they flow through the various components of a distributed system. The support for enriching the context that’s being propagated with additional key-value pairs makes it possible to encode application-specific metadata in the trace, which might give developers more debugging power.
In my experience of building and operating microservices, tracing has been very useful when diagnosing issues, both in development and in production. Understanding the behaviour of a service-based application is often a non-trivial task, and the challenge only deepens when this is combined with the non-deterministic behaviour exhibited by the system (particularly when deploying into cloud environments) or with communication with unreliable third parties.
Distributed Tracing 101
The basic idea behind distributed tracing is relatively straightforward: specific inflection points that a request travels through must be identified within a system and instrumented. These inflection points include, for example, the API gateway, each internal service, and any data stores or stateful external services. All of the trace data must then be coordinated and collated to provide a meaningful view of a request; this is why you hear about the use of correlation identifiers, which enable related trace data to be grouped together for more meaningful analysis.
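To make this concrete, here is a minimal sketch using the OpenTracing Java API (the same API used by MicroDonuts later in this article) that starts a span at one such inflection point and enriches it with application-specific metadata. The class, operation name, and tag key are illustrative rather than taken from the demo application:

import io.opentracing.Scope;
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class InflectionPointExample {

    // Start a span at an instrumented inflection point and enrich it with
    // application-specific metadata; "checkout" and "order.id" are made-up names
    public void handleCheckout(String orderId) {
        Tracer tracer = GlobalTracer.get();
        try (Scope scope = tracer.buildSpan("checkout").startActive(true)) {
            scope.span().setTag("order.id", orderId);
            // ... call downstream services or data stores here; any spans they
            // create will share the same trace (correlation) identifier ...
        }
    }
}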
The CNCF-hosted OpenTracing API is becoming the de facto open tracing standard. Several popular open source frameworks, such as OpenZipkin and Jaeger, implement this API, as do commercial options such as LightStep. Many microservice frameworks now offer integrated or compatible tracing implementations: the Java Spring Boot stack provides Spring Cloud Sleuth with Zipkin integration, and the Golang micro framework provides OpenTracing wrappers.
Distributed Tracing with the Ambassador Edge Stack API Gateway
Recently, the Kubernetes-native Ambassador API gateway added distributed tracing support based on the functionality provided by the underlying Envoy Proxy at its core. Ambassador can now generate a request (correlation) identifier and populate the x-request-id HTTP header. Upstream services can forward this header to propagate the request context for use in tracing and unified aggregate logging.
The Ambassador tracing implementation currently supports OpenZipkin and Zipkin-compatible backends, such as Jaeger, as well as the commercial xPM offering from LightStep. As with Envoy, when using the Zipkin tracer Ambassador adds the B3 HTTP headers, and when using the LightStep tracer the x-ot-span-context HTTP header is added to any request sent upstream.
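For an upstream service that sits in the middle of a call chain, the minimum requirement for participating in a trace is simply to copy these propagation headers onto any outgoing requests. The sketch below illustrates the idea; the header list reflects the request-id and B3/LightStep headers mentioned above, while the HttpURLConnection-based forwarding code is my own illustration rather than part of the MicroDonuts application:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;
import javax.servlet.http.HttpServletRequest;

public class TraceHeaderForwarder {

    // Headers added by Ambassador/Envoy: the request id, the Zipkin B3 headers,
    // and the LightStep span context header
    private static final List<String> TRACE_HEADERS = Arrays.asList(
            "x-request-id", "x-b3-traceid", "x-b3-spanid", "x-b3-parentspanid",
            "x-b3-sampled", "x-b3-flags", "x-ot-span-context");

    public HttpURLConnection openUpstreamConnection(HttpServletRequest incoming, URL upstream)
            throws IOException {
        HttpURLConnection connection = (HttpURLConnection) upstream.openConnection();
        // Copy each tracing header from the inbound request, if present,
        // so the upstream call joins the same trace
        for (String header : TRACE_HEADERS) {
            String value = incoming.getHeader(header);
            if (value != null) {
                connection.setRequestProperty(header, value);
            }
        }
        return connection;
    }
}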
Exploring the MicroDonuts Application
The OpenTracing community has very helpfully contributed a series of example applications that demonstrate distributed tracing using all of the implementations mentioned. The “java-opentracing-walkthrough” GitHub repository contains a “MicroDonuts” example that generates traces for a web-based donut ordering application. This example is designed to run via Maven, and does not need any additional infrastructure such as Docker or Kubernetes.
The MicroDonuts application is executed as a standalone application (with a single static void main entry point), but it provides several servlets that simulate the multiple services involved in preparing our donut orders:
void registerServlets() {
    kitchenConsumer = new KitchenConsumer();
    addServlet(new ServletHolder(new OrderServlet(kitchenConsumer)), "/order");
    addServlet(new ServletHolder(new StatusServlet(kitchenConsumer)), "/status");
    addServlet(new ServletHolder(new ConfigServlet(config)), "/config.js");
}
The tracing component within the application is implemented using the OpenTracing Java SDK together with the relevant tracer client libraries. Depending on the configuration file specified (more on this below), the application initialises the corresponding tracing framework:
} else if ("zipkin".equals(tracerName)) {
    OkHttpSender sender = OkHttpSender.create("http://" +
            config.getProperty("zipkin.reporter_host") + ":" +
            config.getProperty("zipkin.reporter_port") + "/api/v1/spans");
    Reporter<Span> reporter = AsyncReporter.builder(sender).build();
    tracer = BraveTracer.create(Tracing.newBuilder()
            .localServiceName(componentName)
            .spanReporter(reporter)
            .build());
}
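The servlets obtain this tracer via GlobalTracer.get() (as shown in the next code excerpt), which means the configured instance must be registered with the OpenTracing GlobalTracer during startup. A minimal sketch of that registration step, assuming a tracer variable built as above (the class and method names here are my own):

import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public final class TracerBootstrap {

    // Register the configured tracer (e.g. the BraveTracer built above) so that
    // GlobalTracer.get() returns it in the servlets; registration is guarded
    // because GlobalTracer only allows a single tracer to be registered
    public static void registerOnce(Tracer tracer) {
        if (!GlobalTracer.isRegistered()) {
            GlobalTracer.register(tracer);
        }
    }
}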
Spans are then created for each invocation of a servlet via an HTTP request. So, for example, if you look in the ApiContextHandler class you will see the modification I have made to the OrderServlet: it extracts the tracing headers from the current downstream request (which will be made by Ambassador after you request the app via your web browser) and assigns the extracted span context as the parent of the new span created for each order of donuts:
@Override
public void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    TextMap headersTextMap = new TextMapExtractAdapter(getHeadersInfo(request));
    SpanContext parentSpanCtx = GlobalTracer.get()
            .extract(Format.Builtin.HTTP_HEADERS, headersTextMap);
    try (Scope orderSpanScope = GlobalTracer.get()
            .buildSpan("order_span")
            .asChildOf(parentSpanCtx)
            .startActive(true)) {
        request.setAttribute("span", orderSpanScope.span());
        ...
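The getHeadersInfo helper referenced above simply copies the inbound HTTP header names and values into a Map so that they can be wrapped in the TextMapExtractAdapter. A minimal version might look like the following sketch (my illustration, not necessarily the exact code in the repository):

// Requires: java.util.Enumeration, java.util.HashMap, java.util.Map,
// javax.servlet.http.HttpServletRequest
private Map<String, String> getHeadersInfo(HttpServletRequest request) {
    // Copy every inbound header into a Map so it can be wrapped in a
    // TextMapExtractAdapter and passed to Tracer#extract()
    Map<String, String> headers = new HashMap<>();
    Enumeration<String> headerNames = request.getHeaderNames();
    while (headerNames.hasMoreElements()) {
        String name = headerNames.nextElement();
        headers.put(name, request.getHeader(name));
    }
    return headers;
}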
With these modifications complete, all I needed to do was package up the application for deployment onto Kubernetes.
Deploying MicroDonuts on Kubernetes with Ambassador
Having used the MicroDonuts example to demonstrate tracing concepts at meetups, I often get asked about packaging this application for deployment to Kubernetes (as this is a popular platform for many organisations running applications in production). I was also keen to test the new Edge Stack distributed tracing functionality, so this provided the perfect excuse to package the MicroDonuts application in Docker and deploy it alongside Edge Stack on Kubernetes.
I’ve provided a deep dive into the approach I took in another post (along with the trials and tribulations I encountered!). This article focuses on the results, and the aim is for you to get up and running with the example within 10 minutes.
First, clone my forked version of the project from https://github.com/danielbryantuk/java-opentracing-walkthrough and navigate into the directory:
$ git clone https://github.com/danielbryantuk/java-opentracing-walkthrough
$ cd java-opentracing-walkthrough
You will need an empty Kubernetes cluster configured and ready to go. I typically use Google’s Kubernetes Engine (GKE) with ephemeral instances which are configured via the gcloud SDK, as this provides the real cluster experience at a reasonable price point. However, you should be able to use minikube or Docker for Mac/Windows with minimal changes.
$ gcloud container clusters create ambassador-tracing-demo --preemptible
...
kubeconfig entry generated for ambassador-tracing-demo.
NAME                     LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
ambassador-tracing-demo  us-central1-a  1.9.7-gke.5     35.226.58.170  n1-standard-1  1.9.7-gke.5   3          RUNNING
$
$ # As all GKE clusters enable RBAC by default, create a cluster-admin clusterrolebinding
$ # for your user account
$ kubectl create clusterrolebinding cluster-admin-binding-new \
  --clusterrole cluster-admin --user my.user.account@gmail.com
Feel free to explore the Dockerfile within this directory, although it is a fairly standard Java image with an OpenJDK 8 JRE. Next, navigate to the kubernetes-ambassador directory, which contains the config files necessary to bootstrap the demonstration:
$ cd kubernetes-ambassador/
(master) kubernetes-ambassador $ ls -lsa
total 304
0 drwxr-xr-x   8 danielbryant  staff   256 12 Aug 15:13 .
0 drwxr-xr-x  11 danielbryant  staff   352  8 Aug 16:42 ..
8 -rw-r--r--@  1 danielbryant  staff  2043  1 Aug 16:26 ambassador-rbac.yaml
8 -rw-r--r--   1 danielbryant  staff   374  8 Aug 10:43 ambassador-service.yaml
8 -rw-r--r--   1 danielbryant  staff  1145 12 Aug 15:10 microdonut.yaml
8 -rw-r--r--   1 danielbryant  staff   576 12 Aug 14:53 tracing-config.yaml
8 -rw-r--r--   1 danielbryant  staff  1037  8 Aug 11:16 zipkin.yaml
The Ambassador API gateway Deployment and admin Service are configured in the ambassador-rbac.yaml file. The ambassador-service.yaml file contains a simple example of an Edge Stack Mapping annotation (with a rewrite) for the external httpbin.org service.
Deploying Zipkin and Ambassador in Kubernetes
The Zipkin Deployment and Service are configured within the zipkin.yaml file, which uses the OpenZipkin Docker image. There are two Ambassador annotations in this file: a TracingService that specifies the service responsible for collecting Zipkin trace data, and a Mapping that allows you to navigate to the Zipkin UI and examine traces in your browser. An excerpt of this config is shown below:
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: TracingService
      name: /tracing/
      service: zipkin:9411
      driver: zipkin
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: zipkin_mapping
      prefix: /zipkin/
      rewrite: ""
      service: zipkin:9411
The MicroDonut application defined in microdonut.yaml consists of a single Service and an associated Deployment that uses the container image I have pushed to my Docker Hub repository at danielbryantuk/microdonut:1.3. I have also added an Ambassador Mapping annotation for the service so that you can order some donuts via the UI.
If you examine the microdonut.yaml file, you will see that I have specified a volume mount for the microdonut container and backed this with a ConfigMap. The ConfigMap is defined in the tracing-config.yaml file, and its content is used to configure all of the tracing options for the MicroDonut app. This is the file to edit if you want to change from Zipkin to Jaeger tracing, or to alter the Zipkin collector Service host or port.
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tracing-config
data:
  tracer_config.properties: |
    public_directory=../client

    // Selector for the below config blocks
    tracer=zipkin

    // Jaeger config
    jaeger.reporter_host=localhost
    jaeger.reporter_port=5775

    // Zipkin config
    zipkin.reporter_host=zipkin
    zipkin.reporter_port=9411

    // LightStep config
    lightstep.collector_host=collector.lightstep.com
    lightstep.collector_port=80
    lightstep.access_token={your_token}
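For reference, the application reads the mounted tracer_config.properties as a standard Java properties file (the earlier code excerpt calls config.getProperty(...)). Loading it might look roughly like the sketch below; the mount path is an assumption on my part, so check the volumeMount in microdonut.yaml for the actual location, and note the walkthrough’s own loading code may differ:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class TracingConfigLoader {

    // Assumed mount path for the ConfigMap volume; check the volumeMount in
    // microdonut.yaml for the location actually used by the Deployment
    private static final String CONFIG_PATH = "/config/tracer_config.properties";

    public static Properties load() throws IOException {
        Properties config = new Properties();
        try (InputStream in = new FileInputStream(CONFIG_PATH)) {
            config.load(in);
        }
        // e.g. config.getProperty("tracer") returns "zipkin" with the ConfigMap above,
        // and config.getProperty("zipkin.reporter_host") returns "zipkin"
        return config;
    }
}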
You can deploy all of the services and config specified in the YAML files within the kubernetes-ambassador directory like so:
$ kubectl apply -f .
service "ambassador-admin" created
clusterrole "ambassador" created
serviceaccount "ambassador" created
clusterrolebinding "ambassador" created
deployment "ambassador" created
service "ambassador" created
service "microdonut" created
deployment "microdonut" created
configmap "tracing-config" created
service "zipkin" created
deployment "zipkin" created
You can now query all of the services via kubectl, although if you are using GKE you may have to wait a short while before the Ambassador LoadBalancer Service gets an external IP (initially a query may show “<pending>”):
$ kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
ambassador         LoadBalancer   10.51.248.134   35.224.129.220   80:30306/TCP     1m
ambassador-admin   NodePort       10.51.245.207   <none>           8877:32035/TCP   1m
kubernetes         ClusterIP      10.51.240.1     <none>           443/TCP          1m
microdonut         ClusterIP      10.51.245.91    <none>           10001/TCP        1m
zipkin             NodePort       10.51.244.91    <none>           9411:31899/TCP   1m
You can now view the MicroDonut web page by visiting:
http://<external-ip>/microdonut/
You can also view the Zipkin dashboard by visiting:
http://<external-ip>/zipkin/
With everything set up, all you need to do now is order some donuts!
Tracing the Donuts
To see some interesting traces, you will need to order some donuts. Simply click on several of the donut pictures on the MicroDonut web page, and then click “order” in the icon that appears below the donut images. After you do this, you should see a countdown in the icon, followed by a series of steps in the donut preparation phases (“add”, “wait”, “cooking”, etc). Feel free to place several orders, but try not to get too hungry when doing this!
Next, open the Zipkin dashboard and click “Find Traces”. Every request that passes through Ambassador (including any request to the Zipkin dashboard) is traced, and therefore you will have to identify a trace involved in the preparation of donuts.
Typically, most of the shorter traces with 2 spans (two service hops) are related to the Zipkin dashboard, and the longer traces with 9+ spans (multiple service hops, with multiple spans created per service) are related to the MicroDonut application. You may need to make several donut orders and quickly switch to the Zipkin dashboard to find a related trace, as these can quickly get pushed out of the UI search results by other requests being traced by Ambassador.
In the screenshot below you can see that, in my example, a donut order created 11 spans and took 184.084 ms to complete.
You can click on the trace in order to get a more detailed breakdown of how the request was handled, e.g.:
Here you can see that Ambassador dealt with the ingress request before passing it upstream to the microdonut service, where the “order_span” begins. The “ambassador-default” service name is visible in the trace; I’m not sure why the “microdonut” service name does not appear, as it does look to be correctly specified within the Zipkin (Brave / OpenTracing) configuration of the MicroDonuts application.
Another interesting thing to note is that multiple requests are made to the same MicroDonut application during the ordering and cooking of your donut, and each of these is an out-of-process HTTP request made via the localhost loopback adapter (not going via Ambassador); even so, the Brave implementation propagates the span information so that all of these requests are correctly joined into a single trace.
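In OpenTracing terms, joining those requests into a single trace means injecting the current span’s context into the headers of each outgoing HTTP request, so that the receiving servlet can extract it (as shown in the doPost excerpt earlier). A hedged sketch of the injection side is shown below; the class name and the HttpURLConnection usage are illustrative rather than the exact MicroDonuts code:

import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.propagation.Format;
import io.opentracing.propagation.TextMapInjectAdapter;
import io.opentracing.util.GlobalTracer;

import java.net.HttpURLConnection;
import java.util.HashMap;
import java.util.Map;

public class SpanPropagationExample {

    // Inject the current span's context into the outgoing request headers so the
    // receiving servlet can extract it and continue the same trace
    public void propagate(Span currentSpan, HttpURLConnection outgoingRequest) {
        Tracer tracer = GlobalTracer.get();
        Map<String, String> headers = new HashMap<>();
        tracer.inject(currentSpan.context(), Format.Builtin.HTTP_HEADERS,
                new TextMapInjectAdapter(headers));
        for (Map.Entry<String, String> header : headers.entrySet()) {
            outgoingRequest.setRequestProperty(header.getKey(), header.getValue());
        }
    }
}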
What’s Next?
Given this sample application and Ambassador config, you should be able to get any Zipkin-compatible application tracing up and running with minimal hassle. As Ambassador injects the Zipkin headers into an upstream request, any application that recognises these headers (and propagates them onwards) should be traceable.
The example Java code shows how to implement the Zipkin header processing using the Brave library (and how to attach the Edge Stack-generated span as a parent to each child span). I’ll go into more detail about how I modified the MicroDonuts example in another post. However, don’t let the fact that the example is written in Java stop you: any language or framework that supports Zipkin should work right out of the box with the provided config. All you need to specify is the Zipkin Kubernetes Service host and port, and you should be good to go!
Learn more about Ambassador Labs and the Edge Stack Distributed Tracing feature here.