Recently, I had the opportunity to work on a complex microservice built with Spring Boot and Quarkus, developing a POST endpoint that submits data to a third-party service. The complexity arose from the distributed nature of the application and the numerous interdependent services (such as Token and Login) required to complete the task.
In any distributed application, several challenges can arise: a single request hops across multiple services, failures surface far from their root cause, and it's hard to see where the time actually goes.
Many of us rely on the IDE debugger and our old favorite logging statements to get the job done. However, in today's article, I would like to introduce a cooler (and often faster) way to debug and develop quality code. After all, who wouldn't want some extra free time to spend with family?
If you're not familiar with OTel, I recommend reading this quick getting-started article before proceeding: https://opentelemetry.io/docs/languages/go/getting-started/
1. Add the OpenTelemetry configuration to your application.properties:
quarkus.application.name=$yourServiceName
quarkus.http.port=8090
quarkus.opentelemetry.enabled=true
quarkus.opentelemetry.tracer.enabled=true
quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://localhost:4317
# For a remote collector, point the exporter at it instead, e.g.:
# quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://10.209.44.220:4317
quarkus.opentelemetry.tracer.log-trace-context.enabled=true
quarkus.opentelemetry.tracer.log-trace-context.span-log-attribute=spanId
quarkus.opentelemetry.tracer.log-trace-context.trace-log-attribute=traceId
quarkus.log.console.level=DEBUG
#quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n
quarkus.log.console.json=true
quarkus.otel.resource.attributes=true
2. Add the quarkus-opentelemetry dependency to your project's pom.xml: https://quarkus.io/extensions/io.quarkus/quarkus-opentelemetry-exporter-otlp/
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-opentelemetry</artifactId>
</dependency>
3. Run a distributed tracing observability platform, such as Jaeger, Zipkin, or Splunk, locally. This step is a game-changer in the development process.
Observability platforms are resource-light but provide a wealth of in-context trace information, helping you understand and debug your code more effectively.
Use the following docker run command to start the Jaeger all-in-one container locally. This is a one-time setup that gives you a local backend for viewing traces.
docker run --rm --name jaeger \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 4317:4317 \
-p 4318:4318 \
-p 14250:14250 \
-p 14268:14268 \
-p 14269:14269 \
-p 9411:9411 \
jaegertracing/all-in-one:1.57
More - https://www.jaegertracing.io/docs/1.57/getting-started/
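To make the setup concrete, here is a minimal sketch of a JAX-RS resource (the class name, path, and payload are hypothetical, not taken from the service described above). Once the configuration and dependency from steps 1 and 2 are in place, every request hitting an endpoint like this is traced automatically and exported to the Jaeger container from step 3:
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import java.util.Map;

// Hypothetical endpoint: with quarkus-opentelemetry on the classpath, every POST /data
// request gets a server span (with traceId/spanId) without any tracing code in this class.
@Path("/data")
public class DataResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Map<String, Object> submit(Map<String, Object> requestMap) {
        // Business logic goes here; the span is created and exported by the extension.
        return Map.of("status", "accepted");
    }
}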
You are all set to work in a modern, distributed development environment.
Open http://localhost:16686/ to access your OTel data in the Jaeger UI. Think of it as reading your logs, but in a much more digestible format.
Thanks to OTel auto-instrumentation, you'll gain valuable insights into your applications.
As you can see, the screenshot above shows all the integration points of your application (Service A), including the status and execution time across these touchpoints. Voilà!
All of this is available with ZERO extra lines of code. As a developer, you can immediately appreciate the power of the telemetry data for your service. Compare this with sifting through logs to pinpoint issues during development—it's a game-changer!
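As a concrete (and hypothetical) illustration of those touchpoints: if the service reaches the third party through the MicroProfile REST client, each outbound call shows up as its own client span under the request trace, again with no tracing code. The interface and config key below are illustrative and assume the quarkus-rest-client extension is already in use:
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.MediaType;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// Hypothetical downstream client: calls made through this interface are auto-instrumented,
// so the third-party submission appears as a child span of the incoming request.
@Path("/forms")
@RegisterRestClient(configKey = "third-party-api")
public interface ThirdPartyFormClient {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    String submitForm(String formPayload);
}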
So far, so good 👍
Now, I want to demonstrate another powerful addition you can include in your application: OTel manual instrumentation.
It's easy to incorporate into your development process. Instead of reaching for the logger (debug, info), you can use the current Span and set attributes on it (e.g., the request map).
Here is sample code that does exactly that:
....
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.instrumentation.annotations.WithSpan;
....

@WithSpan
public DataResponse postFormData(Map<String, Object> requestMap) {
    DataResponse dataResponse = new DataResponse();
    FormPostResponse responseEntity = null;
    String requestBody = null;
    // The span created by @WithSpan, so we can enrich it below
    Span span = Span.current();
    try {
        requestBody = objectMapper.writeValueAsString(requestMap);
    } catch (JsonProcessingException ex) {
        log.error("postFormData: Failed to create the request. Exception: {}", ex.getMessage());
        span.setStatus(StatusCode.ERROR, "Post Data Exception");
        span.recordException(ex);
    }
    try {
        HiddenFormPostRequest marketoFormPostRequest =
            createMarketoHiddenPostRequest(requestMap);
        String marketoFormPostRequestJson =
            objectMapper.writeValueAsString(marketoFormPostRequest);
        log.debug("postFormData: formPostRequest {}", marketoFormPostRequestJson);
        responseEntity =
            marketoRestService.submitFormDataToMarketo(marketoFormPostRequestJson);
        // ... map responseEntity into dataResponse ...
    } catch (Exception ex) {
        log.error("postFormData: Failed to create the POST request. Request: {}, Exception: {}",
            requestBody, ex.getMessage());
        getResponse(dataResponse, HttpStatus.SC_INTERNAL_SERVER_ERROR,
            INTERNAL_SERVER_ERROR, null, ex.getMessage());
        span.setStatus(StatusCode.ERROR, "Data Exception");
        span.recordException(ex);
        return dataResponse;
    }
    span.setStatus(StatusCode.OK, "Data Form Success");
    span.setAttribute("request", requestBody);
    return dataResponse;
}
You may have noticed a significant difference between logging and tracing: with tracing, it's much easier to attach your own context and custom logic to spans for observability.
try {
    ......
    ...
} catch (Exception ex) {
    dataResponse.setMessage(Response.Status.FORBIDDEN.getReasonPhrase());
    span.setStatus(StatusCode.ERROR, "Form Data Exception");
    span.setAttribute("hiddenForm.contactId", contactId);
    span.recordException(new CommonsException(ErrorCode.INVALID_INPUT, "Contact Id is Empty or not Valid"));
    .......
}
Now, you can use this data in your APM tool to troubleshoot and understand the code. Looking at the OTel spans also lets you verify that all critical paths of functionality across services are covered by the code.
By using manual instrumentation, you gain more control over what gets traced and logged, enhancing your ability to debug and monitor your application's performance.
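If you need even finer-grained control than @WithSpan gives you, you can create spans yourself with an injected Tracer. The sketch below is illustrative (the class and span names are hypothetical) and assumes the OpenTelemetry Tracer is available as a CDI bean, which the quarkus-opentelemetry extension provides:
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class FormSubmissionTracer {

    @Inject
    Tracer tracer; // provided by the quarkus-opentelemetry extension

    public void submitWithChildSpan(String requestBody) {
        // Wrap the critical section in an explicit child span instead of a debug log line
        Span span = tracer.spanBuilder("form.submit").startSpan();
        try (Scope scope = span.makeCurrent()) {
            span.setAttribute("request.size", requestBody.length());
            // ... call the downstream service here ...
            span.setStatus(StatusCode.OK);
        } catch (RuntimeException ex) {
            span.recordException(ex);
            span.setStatus(StatusCode.ERROR, "Form submission failed");
            throw ex;
        } finally {
            span.end();
        }
    }
}
The try-with-resources block keeps the new span current while the work runs, so anything called inside it (including auto-instrumented clients) nests under it in the trace.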
I should stop here. If I've piqued your interest, you might want to check out the OTel session lineup at this year's .conf24:
OBS1125B - Take Your Splunk® Observability Game to the Next Level with Tags
Derek Mitchell, Global Observability Specialist, Splunk
DEM1978 - OpenTelemetry™: Own Your Data. Improve Performance. Deliver Software Faster.
OBS1875C - Adopting OpenTelemetry at Yahoo: The Good, The Bad and The Ugly
Check out the full .conf24 catalog for all of this year's OTel sessions!
You can also watch these sessions online from the comfort of your home. Take a look, and see you at .conf!