
Auto-Injector for Everything Else: Making OpenTelemetry Truly Universal

msimon-splunk
Splunk Employee

You might have seen Splunk’s recent announcement about donating the OpenTelemetry Injector to the community. 

If you haven’t, it’s worth a read. It shares how Splunk and Dash0 partnered to make OpenTelemetry easier to adopt across every environment.

In this post, I’ll unpack how the Injector works, how it differs from the OpenTelemetry Operator and Automatic Discovery, and why it’s a big step forward for enterprises looking to extend observability across all their workloads, not just the containerized ones.

The Enterprise Reality: Hybrid Environments Are Here to Stay

If you’re like many large IT organizations, your environment spans cloud-native applications, virtual machines, and on-prem workloads that make perfect sense to keep where they are.
Some systems are decades old, others are brand new, but together they make up the backbone of your business.

And that’s where observability can get complicated.
Different environments often mean different monitoring approaches, which can lead to increased tool costs, inconsistent methods for measuring system health, and what I like to call “swivel-chair analysis”: switching between dashboards and data sources to piece together what’s really happening.

The OpenTelemetry Injector helps close that gap by extending OpenTelemetry’s reach across all your environments, putting you in control of your data. It brings the same consistency, scalability, and flexibility across on-prem, VM-based, and hybrid workloads that you expect from your cloud-native applications and turns that consistency into deeper, more contextual insights that drive faster analysis and better understanding of system behavior.

Enter the OpenTelemetry Injector

Originally developed by Splunk and rewritten in Zig by Dash0 for the OpenTelemetry project, the Injector provides zero-touch auto-instrumentation for host-based workloads.

It’s designed for environments that can’t (or don’t need to) be containerized: the types of applications running on VMs, bare-metal servers, or traditional tiered architectures.

Instead of editing application code, rebuilding containers, or writing custom startup logic, the Injector works directly at the Linux host level, automatically attaching the right OpenTelemetry agents when your applications start up.

Here’s what that looks like:

  • Installed as a lightweight shared library: libotelinject.so
  • Written in Zig for cross-platform compatibility across glibc and musl
  • Enabled by preloading the library with LD_PRELOAD or configuring it globally in /etc/ld.so.preload
  • Automatically injects environment variables and agent paths for Java, Node.js, and .NET
  • Configured via a single file: /etc/opentelemetry/otelinject.conf

That means no code changes, no complex pipelines, and no manual rebuilds, just consistent telemetry from applications wherever they run.

How It Works

The diagram below shows how the Injector integrates into the Linux dynamic linking process to automatically load instrumentation before the application starts running:

[Diagram: the OpenTelemetry Injector hooking into the Linux dynamic linking process before application startup]

When a process starts, the operating system links the program and loads its system libraries.
If the OpenTelemetry Injector is configured via LD_PRELOAD or /etc/ld.so.preload, it’s loaded first, allowing it to set environment variables and point supported runtimes (Java, Node.js, and .NET) to their respective OpenTelemetry agents.
The application then executes normally, but with telemetry automatically emitted to your OpenTelemetry Collector.
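If you want to see LD_PRELOAD’s per-process scope for yourself, here’s a quick sketch. The library path matches the one used elsewhere in this post; if the file isn’t present on your machine, the glibc loader just prints a warning and continues, so this is safe to try:

```shell
# LD_PRELOAD set this way applies only to the single command it prefixes;
# the variable is visible inside that process and nowhere else.
LD_PRELOAD=/usr/lib/opentelemetry/libotelinject.so printenv LD_PRELOAD

# A subsequent command has no LD_PRELOAD set at all:
printenv LD_PRELOAD || echo "not set"
```

This per-process form is handy for testing the Injector against a single application before enabling it globally via /etc/ld.so.preload.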

 Note:
The command below is an example of globally enabling the Injector:

echo /usr/lib/opentelemetry/libotelinject.so | sudo tee -a /etc/ld.so.preload

Always confirm the correct library path and configuration in the official OpenTelemetry Injector documentation, as installation paths may vary across Linux distributions.
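Since /etc/ld.so.preload affects every process on the host, it’s worth making the registration idempotent so repeated installs don’t add duplicate lines. A minimal sketch, shown against a scratch file so it’s safe to run anywhere (on a real host, point PRELOAD_FILE at /etc/ld.so.preload and do the append via sudo tee, as in the command above):

```shell
# Scratch file standing in for /etc/ld.so.preload in this sketch:
PRELOAD_FILE="$(mktemp)"
LIB=/usr/lib/opentelemetry/libotelinject.so

add_preload() {
  # Append the library path only if an exact-match line isn't already present.
  grep -qxF "$LIB" "$PRELOAD_FILE" || echo "$LIB" >> "$PRELOAD_FILE"
}

add_preload   # first run appends the entry
add_preload   # second run is a no-op, avoiding duplicate preload lines
cat "$PRELOAD_FILE"
```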


Alternative Activation via systemd

In addition to the preload method, the OpenTelemetry Injector also supports systemd-based activation, as noted in recent community discussions and blog posts (see Isabella Langan’s article on the OTel website).

This approach is especially useful for administrators who want finer control or operate in environments where global preloads aren’t allowed.

Using a standard systemd drop-in configuration, you can set environment variables that activate the Injector for specific managed services.

You can configure a systemd drop-in like this:

sudo systemctl edit myapp.service

Then add:

[Service]
Environment="LD_PRELOAD=/usr/lib/opentelemetry/libotelinject.so"

After saving, restart the service:

sudo systemctl restart myapp.service

The next time the application starts, the Injector loads automatically, attaches the appropriate language agent, and begins sending telemetry, all without modifying the original service configuration.
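For reference, a fuller drop-in could pair the preload with standard OpenTelemetry exporter settings. This is a hypothetical example: the file path, service name, and endpoint are illustrative, though OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_SERVICE_NAME are standard OTel environment variables:

```ini
# /etc/systemd/system/myapp.service.d/otel-inject.conf (illustrative path)
[Service]
Environment="LD_PRELOAD=/usr/lib/opentelemetry/libotelinject.so"
Environment="OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318"
Environment="OTEL_SERVICE_NAME=myapp"
```

After writing a drop-in directly like this (instead of via systemctl edit), run sudo systemctl daemon-reload before restarting the service so systemd picks up the new file.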

If You’re Like Me, You Might Be Thinking…

“Wait, isn't this basically what the OpenTelemetry Operator does?”

It’s a fair question, and one I asked myself when I first started learning about the Injector.
Both approaches automate instrumentation, but they work in different layers of the stack.

| Aspect | Injector (Host-Level) | Operator Auto-Instrumentation (Kubernetes) | Auto-Discovery (Collector Runtime) |
| --- | --- | --- | --- |
| Primary Purpose | Injects environment variables and language agents into host-based processes at startup. | Adds auto-instrumentation agents to Kubernetes pods automatically. | Dynamically detects workloads/endpoints and configures receivers on the fly. |
| Where It Runs | Linux hosts or containers (LD_PRELOAD / /etc/ld.so.preload) | Kubernetes admission webhook | Inside the OTel Collector |
| What It Does | Configures app runtime env and agent hooks (Java, Node.js, .NET) | Adds agents + env vars via annotations (inject-java: true) | Finds and scrapes telemetry endpoints dynamically |
| Target Use Case | VM-based services, traditional app tiers, hybrid environments | Cloud-native environments with centralized consistency | Dynamic services that appear/disappear quickly |
| Control Surface | /etc/opentelemetry/otelinject.conf | Pod annotations + Instrumentation CRD | Collector discovery / observers config |
| Goal Summary | Make the app emit telemetry. | Make the cluster emit telemetry consistently. | Find and collect telemetry dynamically. |


So in summary:

[Image: summary of the three approaches]

Connecting the Dots: Injector + Automatic Discovery

If you haven’t already, check out Caitlin Halla’s three-part blog series on Automatic Discovery in the Splunk Distribution of the OpenTelemetry Collector.

Automatic Discovery simplifies observability by automatically detecting and monitoring services (like databases, caches, and web servers) as they come online with no YAML wrangling, no restarts, and no manual receiver configuration.

Together, the Injector and Automatic Discovery close two sides of the same loop:

  • The Injector ensures your applications emit telemetry automatically.
  • Automatic Discovery ensures your collectors find and capture it automatically.

That combination means faster time to value and seamless observability across the entire technology stack.

Why This Matters: Extending Observability Everywhere

We’ve already nailed comprehensive visibility across Kubernetes and microservices, but enterprises are still hybrid: some workloads make sense to keep on hosts or in multi-tier architectures. The Injector is a big step forward in making those environments first-class citizens of observability.

This contribution from Splunk and Dash0 helps organizations:

  • Extend OpenTelemetry coverage to every environment
  • Reduce operational overhead by removing per-app instrumentation work
  • Standardize data collection through a unified OTel pipeline
  • Integrate on-prem telemetry with cloud-native observability practices

It’s about leverage: bringing the power of OpenTelemetry everywhere your applications live.

Try It Out

If you’re managing hosts, VMs, or hybrid workloads, the Injector can simplify your rollout dramatically.
Grab it from the OpenTelemetry Injector repository, follow the quick-start guide, and connect it to your existing OpenTelemetry Collector.

If you’re already using Splunk Observability Cloud, you can send that data directly, with no additional configuration required.

Wrap-Up

OpenTelemetry has already changed how we think about observability.
With the addition of the Injector, it’s now possible to extend that simplicity to every environment, from Kubernetes clusters to traditional servers.

Observability shouldn’t stop at the edge of your cluster.
The OpenTelemetry Injector ensures it doesn’t.

Check out the Splunk announcement if you haven’t yet, and keep an eye on the OpenTelemetry project for ongoing updates.
We’re just getting started.
