In our observability journey so far, we've built comprehensive instrumentation for our Worms in Space application. In our first post, Manual Instrumentation with Splunk Observability Cloud: The What and Why, we explored the fundamentals of manual instrumentation. Our second post, Manual Instrumentation with Splunk Observability Cloud: How to Instrument Backend Applications, walked through backend instrumentation with OpenTelemetry. In our third post, Manual Instrumentation with Splunk Observability Cloud: How to Instrument Frontend Applications, we added frontend observability with Splunk RUM.
Now, let's enhance our observability architecture by introducing the OpenTelemetry Collector – a vendor-agnostic way to receive, process, and export telemetry data for better security, reliability, and flexibility.
Why Use the OpenTelemetry Collector?
Before we dig into the implementation, let's understand why the OpenTelemetry Collector is considered a best practice for production-ready observability.
Current Architecture: Direct to Cloud
With our current application architecture, telemetry data from our frontend and backend is pushed directly to Splunk Observability Cloud. This approach works, but it has limitations: every service needs its own access token and ingest endpoint (a security and maintenance burden), our telemetry is tightly coupled to a single backend, and there's no central place to batch, enrich, or filter data before it leaves our environment.
Improved Architecture: OpenTelemetry Collector
The OpenTelemetry Collector addresses these problems by acting as a central gateway: it receives telemetry from both the frontend and backend, processes it (batching, enrichment, filtering), and exports it to Splunk Observability Cloud, keeping credentials and routing decisions in one place.
Implementing the OpenTelemetry Collector
Now that we know the why behind using the OpenTelemetry Collector, let's walk through implementing it in our application step by step. The complete code can be found in our Manual Instrumentation GitHub repository.
Step 1: Add OpenTelemetry Collector Service and Configuration
Git commit: ebc5145
First, we create the Collector configuration file, otel-collector-config.yaml:
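The commit linked above contains the full file. As a rough sketch of its shape (the specific exporters, ports, and variable names below are illustrative assumptions, not a copy of the repo's config), a Splunk-flavored Collector configuration generally looks like this:

```yaml
receivers:
  otlp:                      # accept OTLP from the backend and the frontend RUM agent
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:                     # batch telemetry before export to reduce network overhead

exporters:
  sapm:                      # traces to Splunk APM
    access_token: ${SPLUNK_ACCESS_TOKEN}
    endpoint: https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace
  signalfx:                  # metrics to Splunk Infrastructure Monitoring
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: ${SPLUNK_REALM}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [sapm]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [signalfx]
```

Each pipeline in the service section wires one signal type from its receivers, through its processors, to its exporters.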
To get a deeper explanation of the configuration, check out our previous blog post on OpenTelemetry Configuration & Common Problems. The key elements to note are the receivers (how telemetry enters the Collector), the processors (what happens to it in transit), and the exporters (where it's sent). The multi-exporter approach in our config routes each telemetry type to the exporter best suited for it, rather than forcing everything down a single path.
Since we're using Docker in our application, we also need to add the Collector to Docker Compose. You could also deploy the Collector as a sidecar in a Kubernetes deployment or run it through another service orchestrator. Here's the section we'll add to our docker-compose.yml:
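The committed docker-compose.yml is the authoritative version; a minimal sketch of that service definition, assuming the Splunk Collector image and the default OTLP ports, would look something like:

```yaml
  otel-collector:
    image: quay.io/signalfx/splunk-otel-collector:latest
    command: ["--config=/etc/otel/config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel/config.yaml   # mount the config from Step 1
    environment:
      - SPLUNK_ACCESS_TOKEN=${SPLUNK_ACCESS_TOKEN}   # read from .env, never hard-coded
      - SPLUNK_REALM=${SPLUNK_REALM}
    ports:
      - "4317:4317"   # OTLP over gRPC (backend traces)
      - "4318:4318"   # OTLP over HTTP (frontend RUM)
```

Exposing only the OTLP ports keeps the ingest credentials inside the Collector container; the application services never see them.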
The Splunk distribution of the OpenTelemetry Collector includes optimizations and additional features specifically for Splunk Observability Cloud. You can choose to use the upstream distribution instead, but as a Splunk customer, you'll get support for the Splunk distribution from our team.
Step 2: Configure the Backend to Send Traces to OpenTelemetry Collector
Git commit: 9da3a37d
Now we need to update our Elixir backend configuration to include the OpenTelemetry Collector in our config/runtime.exs file:
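The commit above has the full diff; a minimal sketch of the relevant block, assuming the standard opentelemetry_exporter configuration keys, OTLP over gRPC, and a Compose service named otel-collector, looks like this:

```elixir
# config/runtime.exs
# Send OTLP traces to the local Collector; it handles auth and routing to Splunk.
config :opentelemetry_exporter,
  otlp_protocol: :grpc,
  otlp_endpoint: "http://otel-collector:4317"

# The direct-ingest realm endpoint and authentication headers used previously
# can be removed; the Collector now handles both.
```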
We've removed the direct Splunk configuration that set the realm and access token and instead pointed the OTLP endpoint at the local Collector service. We can also remove the authentication headers, since authentication and routing are now handled by the Collector.
Step 3: Configure Frontend RUM to Send Traces to OpenTelemetry Collector
Git commit: 929ea749
Finally, we can update the frontend RUM configuration to send traces to the OpenTelemetry Collector. To do this, we’ve updated our assets/js/rum.ts to the following:
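As before, the commit is the source of truth. A trimmed-down sketch, assuming the @splunk/otel-web API and using placeholders for the endpoint, token, and application name, might look like:

```typescript
// assets/js/rum.ts
import SplunkRum from "@splunk/otel-web";

SplunkRum.init({
  // Placeholder dev endpoint: the locally running Collector instead of Splunk RUM ingest.
  beaconEndpoint: "http://localhost:4318/v1/traces",
  rumAccessToken: "<RUM_ACCESS_TOKEN>",          // placeholder; injected at build time in practice
  applicationName: "worms-in-space-frontend",    // placeholder name
  deploymentEnvironment: "development",
});
```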
Note: the beaconEndpoint here points to a locally running Collector in our development environment. To make this production-ready, we could deploy a publicly reachable Collector endpoint and point beaconEndpoint at it instead, or deploy the Collector as a sidecar, as mentioned above.
Optional Step 4 for Brownie Points: Add Environment Variable Examples for OpenTelemetry Collector Setup
Git commit: 58826200
This last commit simply adds environment variable examples to our .env.example file to guide users through configuring Splunk Observability Cloud credentials and service settings:
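A sketch of those examples (the variable names here are common conventions and may differ slightly from the repo's file):

```bash
# .env.example -- Splunk Observability Cloud credentials (illustrative names)
SPLUNK_ACCESS_TOKEN=your-ingest-access-token
SPLUNK_REALM=us1
SPLUNK_RUM_TOKEN=your-rum-access-token

# Service settings picked up by the OpenTelemetry SDKs
OTEL_SERVICE_NAME=worms-in-space
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=development
```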
Centralizing this configuration in environment variables keeps credentials out of the code and makes deployments easier.
Wrap Up
In just four commits, we've successfully integrated the OpenTelemetry Collector into our full-stack application. This architectural improvement provides a production-ready telemetry pipeline that scales with your needs while maintaining the comprehensive observability we've built throughout this series.
By centralizing telemetry collection and processing, we’ve simplified our application observability, improved security, and provided the flexibility needed for evolving and adapting to any future observability requirements.
Best of all, our space worms can continue to schedule their spacewalks with confidence, knowing that every aspect of the application is observable through a robust, scalable telemetry pipeline.
Ready to implement the OpenTelemetry Collector in your own application? Start with the Splunk OpenTelemetry Collector documentation and a Splunk Observability Cloud 14-day free trial to see your telemetry data in action.