Product News & Announcements
All the latest news and announcements about Splunk products. Subscribe and never miss an update!

A modern way of developing distributed applications using OTel

Splunk Employee

Recently, I had the opportunity to work on a complex microservice built with Spring Boot and Quarkus, developing a POST endpoint for submitting data to a third-party service. The complexity arose from the distributed nature of the application and the numerous interdependent services (such as Token and Login) required to complete the task.

In any distributed application, several challenges can arise, including:

  • Lack of Familiarity with Interdependent Services: This can leave you without a full understanding of the underlying code.
  • Original Developer Absence: The developers who initially created the services may have moved on to different projects or companies.
  • Extended Development Time: These factors can add a few extra sprints to completing new enhancements to the service.

Many of us rely on the IDE debugger and our old favorite, logging, to get the job done. However, in today's article, I would like to introduce a cooler (and often faster) way to debug and develop quality code. After all, who wouldn't want some extra free time to spend with family?

If you're not familiar with OTel, I recommend reading this quick article before proceeding.

1. To set up your Quarkus application for OTel, note that Quarkus supports OpenTelemetry auto-configuration for traces. The configuration properties align with those in the OpenTelemetry SDK Autoconfigure, with the usual quarkus.* prefix added. For example, in application.properties you can name your service and enrich console logs with trace context:

quarkus.application.name=$yourServiceName
quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n
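If your tracing backend is not on the default endpoint, you can point the OTLP trace exporter at it explicitly. A minimal sketch, assuming a recent Quarkus 3.x release (the property name varies across Quarkus versions, so check the guide for your release):

```properties
# Send traces to a local OTLP endpoint (Jaeger all-in-one listens on 4317)
quarkus.otel.exporter.otlp.traces.endpoint=http://localhost:4317
```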



2. Add the quarkus-opentelemetry dependency to your project's pom.xml.






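The dependency block that step 2 refers to is the Quarkus OpenTelemetry extension; the version is typically managed by the Quarkus BOM, so it can be omitted:

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-opentelemetry</artifactId>
</dependency>
```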
3. Run a distributed tracing observability platform, such as Jaeger, Zipkin, or Splunk, locally. This step is a game-changer in the development process.

Observability platforms are resource-light but provide a wealth of in-context traceability information, helping you understand and debug your code more effectively.

Use the following docker run command to start the Jaeger all-in-one container locally. This is a one-time setup that gives you a backend for viewing traces.



docker run --rm --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 14250:14250 \
  -p 14268:14268 \
  -p 14269:14269 \
  -p 9411:9411 \
  jaegertracing/all-in-one:latest





You are all set with a modern distributed development environment.

Use http://localhost:16686/ to access your OTel data in the Jaeger UI. Think of it as reading logs, but in a much more digestible format.

Thanks to OTel auto-instrumentation, you'll gain valuable insights into your applications.


As you can see, the screenshot above shows all the integration points of your application (Service A), including the status and execution time across these touchpoints. Voilà!

All of this is available with ZERO extra lines of code. As a developer, you can immediately appreciate the power of the telemetry data for your service. Compare this with sifting through logs to pinpoint issues during development—it's a game-changer!

So far, so good 👍

Now, I want to demonstrate another powerful addition you can include in your application: OTel manual instrumentation.

It's easy to incorporate into your development process. Instead of logger calls (debug, info), you can use a Span and set attributes on it (e.g., the request map).

Here is a sample code to do exactly that:



import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.instrumentation.annotations.WithSpan;

@WithSpan
public DataResponse postFromData(Map<String, Object> requestMap) {
  DataResponse dataResponse = new DataResponse();
  FormPostResponse responseEntity = null;
  String requestBody = null;
  // Work with the span that is current for this request
  Span span = Span.current();

  try {
    requestBody = objectMapper.writeValueAsString(requestMap);
  } catch (JsonProcessingException ex) {
    log.error("postFromData: Failed to create the request. Exception: {}", ex.getMessage());
    span.setStatus(StatusCode.ERROR, "Post Data Exception");
    span.recordException(ex);
    return dataResponse;
  }

  try {
    // Build and serialize the form POST request (construction elided in the original)
    // HiddenFormPostRequest marketoFormPostRequest = ...;
    // String marketoFormPostRequestJson = ...;
    log.debug("createMarketoPostRequestFormRequestBody: formPostRequest {}", requestBody);
    // responseEntity = ...;
    span.setStatus(StatusCode.OK, "Data Form Success");
    span.setAttribute("request", requestBody);
    return dataResponse;
  } catch (Exception ex) {
    log.error("postHiddenFromData: Failed to create the POST request. Request: {}, Exception: {}",
        requestBody, ex.getMessage());
    getResponse(dataResponse, HttpStatus.SC_INTERNAL_SERVER_ERROR,
        INTERNAL_SERVER_ERROR, null, ex.getMessage());
    span.setStatus(StatusCode.ERROR, "Data Exception");
    span.recordException(ex);
    return dataResponse;
  }
}



You may have noticed a significant difference between logging and tracing. Tracing makes it much easier to attach your custom logic to spans for observability.




For example, here is how a validation failure can be recorded on the current span:

} catch (Exception ex) {
    span.setStatus(StatusCode.ERROR, "Form Data Exception");
    span.setAttribute("hiddenForm.contactId", contactId);
    span.recordException(new CommonsException(ErrorCode.INVALID_INPUT, "Contact Id is Empty or not Valid"));
}



Now, you can use the data via APM to troubleshoot and understand the code. OTel spans let you verify that all critical paths of functionality across services are covered by the code.


By using manual instrumentation, you gain more control over what gets traced and logged, enhancing your ability to debug and monitor your application's performance.

I should stop here. If I've piqued your interest, you might enjoy the OTel session lineup at this year's .conf24:

OBS1125B - Take Your Splunk® Observability Game to the Next Level with Tags

Derek Mitchell , Global Observability Specialist, Splunk

DEM1978 - OpenTelemetry™: Own Your Data. Improve Performance. Deliver Software Faster.

OBS1875C - Adopting OpenTelemetry at Yahoo: The Good, The Bad and The Ugly

Check out all the OTel sessions at .conf this year!

You can also watch these sessions online from the comfort of your home! Take a look here and see you at .conf!
