You might have seen Splunk’s recent announcement about donating the OpenTelemetry Injector to the community.
If you haven’t, it’s worth a read. It shares how Splunk and Dash0 partnered to make OpenTelemetry easier to adopt across every environment.
In this post, I’ll unpack how the Injector works, how it differs from the OpenTelemetry Operator and Automatic Discovery, and why it’s a big step forward for enterprises looking to extend observability across all their workloads, not just the containerized ones.
If you’re like many large IT organizations, your environment spans cloud-native applications, virtual machines, and on-prem workloads that make perfect sense to keep where they are.
Some systems are decades old, others are brand new, but together they make up the backbone of your business.
And that’s where observability can get complicated.
Different environments often mean different monitoring approaches, which can lead to increased tool costs, inconsistent methods for measuring system health, and what I like to call “swivel-chair analysis”: switching between dashboards and data sources to piece together what’s really happening.
The OpenTelemetry Injector helps close that gap by extending OpenTelemetry’s reach across all your environments and putting you in control of your data. It brings the consistency, scalability, and flexibility you expect from your cloud-native applications to on-prem, VM-based, and hybrid workloads, and it turns that consistency into deeper, more contextual insights that drive faster analysis and better understanding of system behavior.
Originally developed by Splunk and rewritten in Zig by Dash0 for the OpenTelemetry project, the Injector provides zero-touch auto-instrumentation for host-based workloads.
It’s designed for environments that can’t (or don’t need to) be containerized: the types of applications running on VMs, bare-metal servers, or traditional tiered architectures.
Instead of editing application code, rebuilding containers, or writing custom startup logic, the Injector works directly at the Linux host level, automatically attaching the right OpenTelemetry agents when your applications start up.
Here’s what that looks like in practice: no code changes, no complex pipelines, and no manual rebuilds, just consistent telemetry from applications wherever they run.
The diagram below shows how the Injector integrates into the Linux dynamic linking process to automatically load instrumentation before the application starts running:
When a process starts, the operating system links the program and loads its system libraries.
If the OpenTelemetry Injector is configured via LD_PRELOAD or /etc/ld.so.preload, it’s loaded first, allowing it to set environment variables and point supported runtimes (Java, Node.js, .NET) to their respective OpenTelemetry agents.
The application then executes normally, but with telemetry automatically emitted to your OpenTelemetry Collector.
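For a Java or Node.js service, the net effect is roughly equivalent to exporting environment variables like the ones below before the process starts. Treat this as an illustrative sketch only: the Injector sets the variables for you, the agent paths shown here are hypothetical, and the actual names and locations are documented in the Injector repository.

# Illustrative sketch only; the Injector sets these automatically and paths vary by install
export JAVA_TOOL_OPTIONS="-javaagent:/usr/lib/opentelemetry/javaagent.jar"
export NODE_OPTIONS="--require /usr/lib/opentelemetry/otel-js/register.js"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"

JAVA_TOOL_OPTIONS and NODE_OPTIONS are the standard JVM and Node.js hooks for loading an agent at startup, and OTEL_EXPORTER_OTLP_ENDPOINT is the standard OpenTelemetry variable for pointing telemetry at a Collector.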
Note: the command below is an example of globally enabling the Injector:
echo /usr/lib/opentelemetry/libotelinject.so | sudo tee -a /etc/ld.so.preload
Always confirm the correct library path and configuration in the official OpenTelemetry Injector documentation, as installation paths may vary across Linux distributions.
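To sanity-check the setup, you can confirm the entry is present and that the library actually gets mapped into a newly started process (replace <pid> with your application’s process ID; the library name follows the example path above):

cat /etc/ld.so.preload
sudo grep libotelinject /proc/<pid>/maps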
In addition to the preload method, the OpenTelemetry Injector also supports systemd-based activation, as noted in recent community discussions and blog posts (see Isabella Langan’s article on the OTel website).
This approach is especially useful for administrators who want finer control or operate in environments where global preloads aren’t allowed.
Using a standard systemd drop-in configuration, you can set environment variables that activate the Injector for specific managed services.
You can configure a systemd drop-in like this:
sudo systemctl edit myapp.service
Then add:
[Service]
Environment="LD_PRELOAD=/usr/lib/opentelemetry/libotelinject.so"
After saving, restart the service:
sudo systemctl restart myapp.service
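To confirm the drop-in took effect, you can view the merged unit definition and the environment systemd will apply (myapp.service is the example service used above):

systemctl cat myapp.service
systemctl show myapp.service --property=Environment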
The next time the application starts, the Injector loads automatically, attaches the appropriate language agent, and begins sending telemetry, all without modifying the original service configuration.
“Wait, isn't this basically what the OpenTelemetry Operator does?”
It’s a fair question, and one I asked myself when I first started learning about the Injector.
Both approaches automate instrumentation, but they work in different layers of the stack.
| Aspect | Injector (Host-Level) | Operator Auto-Instrumentation (Kubernetes) | Auto-Discovery (Collector Runtime) |
| --- | --- | --- | --- |
| Primary Purpose | Injects environment variables and language agents into host-based processes at startup. | Adds auto-instrumentation agents to Kubernetes pods automatically. | Dynamically detects workloads/endpoints and configures receivers on the fly. |
| Where It Runs | Linux hosts or containers (LD_PRELOAD / /etc/ld.so.preload) | Kubernetes admission webhook | Inside the OTel Collector |
| What It Does | Configures app runtime env and agent hooks (Java, Node.js, .NET) | Adds agents + env vars via annotations (inject-java: true) | Finds and scrapes telemetry endpoints dynamically |
| Target Use Case | VM-based services, traditional app tiers, hybrid environments | Cloud-native environments with centralized consistency | Dynamic services that appear/disappear quickly |
| Control Surface | /etc/opentelemetry/otelinject.conf | Pod annotations + Instrumentation CRD | Collector discovery / observers config |
| Goal Summary | Make the app emit telemetry. | Make the cluster emit telemetry consistently. | Find and collect telemetry dynamically. |
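For comparison, this is roughly what the Operator-based approach from the table looks like on the Kubernetes side: you add a pod annotation and the Operator’s admission webhook injects the Java agent. The workload name and image below are illustrative; the annotation is the standard one used by the OpenTelemetry Operator.

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    instrumentation.opentelemetry.io/inject-java: "true"
spec:
  containers:
    - name: myapp
      image: myorg/myapp:latest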
So in summary: the Injector makes host-based applications emit telemetry, the Operator makes Kubernetes clusters emit telemetry consistently, and Automatic Discovery finds and collects that telemetry dynamically.
If you haven’t already, check out Caitlin Halla’s three-part blog series on Automatic Discovery in the Splunk Distribution of the OpenTelemetry Collector.
Automatic Discovery simplifies observability by automatically detecting and monitoring services (like databases, caches, and web servers) as they come online with no YAML wrangling, no restarts, and no manual receiver configuration.
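Under the hood, this style of discovery maps to Collector building blocks that also exist upstream: observer extensions that watch for endpoints, and the receiver_creator receiver that spins up receivers when a rule matches. The sketch below is an upstream-flavored approximation only; the Splunk distribution’s Automatic Discovery assembles this kind of configuration for you, and the specific receiver, rule, and exporter here are examples.

extensions:
  host_observer:

receivers:
  receiver_creator:
    watch_observers: [host_observer]
    receivers:
      redis:
        rule: type == "port" && port == 6379
        config:
          collection_interval: 10s

exporters:
  otlp:
    endpoint: localhost:4317
    tls:
      insecure: true

service:
  extensions: [host_observer]
  pipelines:
    metrics:
      receivers: [receiver_creator]
      exporters: [otlp]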
Together, the Injector and Automatic Discovery close two sides of the same loop: the Injector gets applications emitting telemetry, and Automatic Discovery finds and collects it.
That combination means faster time to value and seamless observability across the entire technology stack.
We’ve already nailed comprehensive visibility across Kubernetes and microservices. But enterprises are still hybrid, with workloads that make sense to keep on hosts or in multi-tier architectures. The Injector is a big step forward in making those environments first-class citizens of observability.
This contribution from Splunk and Dash0 helps organizations standardize observability across every environment they run. It’s about leverage: bringing the power of OpenTelemetry everywhere your applications live.
If you’re managing hosts, VMs, or hybrid workloads, the Injector can simplify your rollout dramatically.
Grab it from the OpenTelemetry Injector repository, follow the quick-start guide, and connect it to your existing OpenTelemetry Collector.
If you’re already using Splunk Observability Cloud, you can send that data directly, with no additional configuration required.
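A minimal way to point an injected service at a Collector is with the standard OpenTelemetry environment variables, for example in the same kind of systemd drop-in shown earlier (the endpoint and service name are examples for a Collector listening on the local host):

[Service]
Environment="LD_PRELOAD=/usr/lib/opentelemetry/libotelinject.so"
Environment="OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317"
Environment="OTEL_SERVICE_NAME=myapp"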
OpenTelemetry has already changed how we think about observability.
With the addition of the Injector, it’s now possible to extend that simplicity to every environment, from Kubernetes clusters to traditional servers.
Observability shouldn’t stop at the edge of your cluster.
The OpenTelemetry Injector ensures it doesn’t.
Check out the Splunk announcement if you haven’t yet, and keep an eye on the OpenTelemetry project for ongoing updates.
We’re just getting started.