Product News & Announcements
All the latest news and announcements about Splunk products. Subscribe and never miss an update!

A modern way of developing distributed applications using OTel

yogesh-kulkarni
Splunk Employee

Recently, I had the opportunity to work on a complex microservice, built with Spring Boot and Quarkus, that exposes a POST endpoint for submitting data to a third-party service. The complexity arose from the distributed nature of the application and the number of interdependent services (such as Token and Login) required to complete the task.

In any distributed application, several challenges can arise, including:

  • Lack of Familiarity with Interdependent Services: This can lead to gaps in understanding the underlying code.
  • Original Developer Absence: The developers who initially created the services may have moved on to different projects or companies.
  • Extended Development Time: These factors can add a few extra sprints to new enhancements of the service.

Many of us rely on the IDE debugger and our old favorite, logging, to get the job done. However, in today's article, I would like to introduce a cooler (and often faster) way to debug and develop quality code. After all, who wouldn't want some extra free time to spend with family?

If you're not familiar with OTel, I recommend reading this quick getting-started article before proceeding: https://opentelemetry.io/docs/languages/go/getting-started/

1. To set up your Quarkus application for OTel, note that Quarkus supports OpenTelemetry auto-configuration for traces. The configuration properties align with those of the OpenTelemetry SDK Autoconfigure, with the usual quarkus.* prefix added:

quarkus.application.name=$yourServiceName
quarkus.http.port=8090
quarkus.opentelemetry.enabled=true
quarkus.opentelemetry.tracer.enabled=true
# Define the OTLP exporter endpoint only once and point it at your collector or backend
#quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://localhost:9090
quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://10.209.44.220:4317
quarkus.opentelemetry.tracer.log-trace-context.enabled=true
quarkus.opentelemetry.tracer.log-trace-context.span-log-attribute=spanId
quarkus.opentelemetry.tracer.log-trace-context.trace-log-attribute=traceId
quarkus.log.console.level=DEBUG
#quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n
quarkus.log.console.json=true
quarkus.otel.resource.attributes=true

2. Add the quarkus-opentelemetry dependency to your project's pom.xml: https://quarkus.io/extensions/io.quarkus/quarkus-opentelemetry-exporter-otlp/

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-opentelemetry</artifactId>
</dependency>

3. Run a distributed tracing observability platform, such as Jaeger, Zipkin, or Splunk, locally. This step is a game-changer in the development process.

Observability platforms are resource-light but provide a wealth of in-context traceability information, helping you understand and debug your code more effectively.

Use the following docker run command to start the Jaeger all-in-one container locally. This is a one-time setup that gives you a local backend for viewing traces.

docker run --rm --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 14250:14250 \
  -p 14268:14268 \
  -p 14269:14269 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.57

For more details, see https://www.jaegertracing.io/docs/1.57/getting-started/


You are all set to start developing in a modern, distributed development environment.

Open http://localhost:16686/ to access your OTel data in the Jaeger UI. Think of it as reading logs, but in a much more digestible format.
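If you prefer to check the traces programmatically (for example, from a quick script or an integration test), the Jaeger all-in-one container also serves an unofficial query API on the same port. Here is a minimal sketch, assuming the /api/traces endpoint and the service name you set in quarkus.application.name:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class JaegerTraceCheck {

  public static void main(String[] args) throws Exception {
    // Hypothetical service name; use the value of quarkus.application.name.
    String service = "yourServiceName";
    // Jaeger's (unofficial) query API; it returns a JSON document with a "data" array of traces.
    String url = "http://localhost:16686/api/traces?limit=5&service="
        + URLEncoder.encode(service, StandardCharsets.UTF_8);

    HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());

    System.out.println("HTTP " + response.statusCode());
    System.out.println(response.body());
  }
}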

Thanks to OTel auto-instrumentation, you'll gain valuable insights into your applications.

[Screenshot: Jaeger trace view for Service A, showing its integration points with status and execution time]

As you can see, the above screenshot provides all the integration points of your application (Service A), including the status and execution time across these touchpoints. Voila!!!

All of this is available with ZERO extra lines of code. As a developer, you can immediately appreciate the power of the telemetry data for your service. Compare this with sifting through logs to pinpoint issues during development—it's a game-changer!
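To make the "zero extra lines" point concrete, here is a minimal sketch of what such a Quarkus REST resource could look like (the class, path, and injected service are hypothetical, and the jakarta.* imports assume Quarkus 3). There is no OTel code in it at all; the server span comes from auto-instrumentation:

import jakarta.inject.Inject;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import java.util.Map;

@Path("/data")
public class DataResource {

  @Inject
  DataService dataService; // hypothetical bean that calls the third-party service

  @POST
  @Consumes(MediaType.APPLICATION_JSON)
  @Produces(MediaType.APPLICATION_JSON)
  public DataResponse post(Map<String, Object> requestMap) {
    // No OTel code here: the HTTP server span for this request is created automatically.
    return dataService.postFromData(requestMap);
  }
}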

So far, so good 👍

Now, I want to demonstrate another powerful addition you can include in your application: OTel manual instrumentation.

It's easy to incorporate into your development process. Instead of logger calls (debug, info), you can use a Span and set attributes on it (e.g., the request map).

Here is sample code that does exactly that:

....
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.instrumentation.annotations.WithSpan;
....

@WithSpan
public DataResponse postFromData(Map<String, Object> requestMap) {
  DataResponse dataResponse = new DataResponse();
  FormPostResponse responseEntity = null;
  String requestBody = null;
  // @WithSpan wraps this method in a span; Span.current() returns that active span.
  Span span = Span.current();

  // ...
  try {
    requestBody = objectMapper.writeValueAsString(requestMap);
  } catch (JsonProcessingException ex) {
    log.error("postFromData: Failed to create the request. Exception: {}", ex.getMessage());
    span.setStatus(StatusCode.ERROR, "Post Data Exception");
    span.recordException(ex);
  }

  try {
    HiddenFormPostRequest marketoFormPostRequest = createMarketoHiddenPostRequest(requestMap);
    String marketoFormPostRequestJson = objectMapper.writeValueAsString(marketoFormPostRequest);
    log.debug("createMarketoPostRequestFormRequestBody: formPostRequest {}", marketoFormPostRequestJson);
    responseEntity = marketoRestService.submitFormDataToMaketo(marketoFormPostRequestJson);
    // ... map responseEntity onto dataResponse (elided) ...
    // Attach the request payload as a span attribute so it shows up in the trace view.
    span.setAttribute("request", requestBody);
    span.setStatus(StatusCode.OK, "Data Form Success");
  } catch (Exception ex) {
    log.error("postFromData: Failed to create the POST request. Request: {}, Exception: {}",
        requestBody, ex.getMessage());
    getResponse(dataResponse, HttpStatus.SC_INTERNAL_SERVER_ERROR,
        INTERNAL_SERVER_ERROR, null, ex.getMessage());
    span.setStatus(StatusCode.ERROR, "Data Exception");
    // recordException expects a Throwable, so record the caught exception itself.
    span.recordException(ex);
  }
  return dataResponse;
}

You may have noticed a significant difference between logging and tracing. Tracing makes it much easier to attach your custom logic to spans for observability.

try {
  // ...
} catch (Exception ex) {
  dataResponse.setMessage(Response.Status.FORBIDDEN.getReasonPhrase());
  span.setStatus(StatusCode.ERROR, "Form Data Exception");
  span.setAttribute("hiddenForm.contactId", contactId);
  span.recordException(new CommonsException(ErrorCode.INVALID_INPUT, "Contact Id is empty or not valid"));
  // ...
}

Now, you can use this data via APM to troubleshoot and understand the code. The OTel spans let you verify that all critical paths of functionality across services are covered by the code.


By using manual instrumentation, you gain more control over what gets traced and logged, enhancing your ability to debug and monitor your application's performance.
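If @WithSpan and Span.current() are not enough, you can also inject the auto-configured Tracer as a CDI bean and create child spans explicitly. Here is a minimal sketch (the class name, span name, and attribute key are hypothetical):

import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class EnrichmentService {

  @Inject
  Tracer tracer; // the auto-configured OTel tracer exposed by Quarkus

  public void enrich(String contactId) {
    // Create a child span for a unit of work you want to see as its own row in the trace view.
    Span span = tracer.spanBuilder("enrich-contact").startSpan();
    try (Scope scope = span.makeCurrent()) {
      span.setAttribute("contact.id", contactId);
      // ... business logic goes here ...
    } catch (RuntimeException ex) {
      span.recordException(ex);
      span.setStatus(StatusCode.ERROR, "enrichment failed");
      throw ex;
    } finally {
      span.end();
    }
  }
}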

I should stop here. If I've piqued your interest, you might want to check out the OTel session lineup at this year's .conf24:

OBS1125B - Take Your Splunk® Observability Game to the Next Level with Tags

Derek Mitchell, Global Observability Specialist, Splunk

DEM1978 - OpenTelemetry™: Own Your Data. Improve Performance. Deliver Software Faster.

OBS1875C - Adopting OpenTelemetry at Yahoo: The Good, The Bad and The Ugly

Check out the full list of OTel sessions at .conf this year!

You can also watch these sessions online from the comfort of your home. See you at .conf!
