
How to configure the Splunk OpenTelemetry Collector in Kubernetes with an OTLP receiver

Manior
New Member

Hi, I'm new to Splunk and relatively inexperienced with DevOps topics. I have the Splunk OpenTelemetry Collector deployed in a new namespace in my Kubernetes cluster, and I want to configure an OTLP receiver to collect application traces via gRPC. I used https://github.com/signalfx/splunk-otel-collector-chart to deploy the collector, and I enabled the OTLP receiver and added a new pipeline to the agent config (sketch below).
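
For reference, here is roughly what I added to my values.yaml. This is a simplified sketch of my change, not the chart's exact defaults: the endpoint, processor, and exporter names below are placeholders, since the real ones come from the chart's generated config.

# values.yaml (simplified sketch -- merged into the chart's generated agent config)
agent:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317   # standard OTLP gRPC port
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]   # placeholders: keep whatever the chart generated
          exporters: [otlphttp]                 # placeholder: use the exporter your generated config defines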

However, I'm struggling to understand how to send traces to the collector. As far as I can see in Kubernetes, there are many agent pods deployed, one per node:

$kubectl get pods --namespace splunk
NAME                                                        READY   STATUS    RESTARTS   AGE
splunk-otel-collector-agent-286bf                           1/1     Running   0          172m
splunk-otel-collector-agent-2cp2k                           1/1     Running   0          172m
splunk-otel-collector-agent-2gbhh                           1/1     Running   0          172m
splunk-otel-collector-agent-44ts5                           1/1     Running   0          172m
splunk-otel-collector-agent-6ngvz                           1/1     Running   0          173m
splunk-otel-collector-agent-cpmtg                           1/1     Running   0          172m
splunk-otel-collector-agent-dfx8v                           1/1     Running   0          171m
splunk-otel-collector-agent-f4trw                           1/1     Running   0          172m
splunk-otel-collector-agent-g85cw                           1/1     Running   0          172m
splunk-otel-collector-agent-gz9ch                           1/1     Running   0          172m
splunk-otel-collector-agent-hjbmt                           1/1     Running   0          172m
splunk-otel-collector-agent-lttst                           1/1     Running   0          172m
splunk-otel-collector-agent-lzz4f                           1/1     Running   0          172m
splunk-otel-collector-agent-mcgc8                           1/1     Running   0          173m
splunk-otel-collector-agent-snqg8                           1/1     Running   0          173m
splunk-otel-collector-agent-t2gg8                           1/1     Running   0          171m
splunk-otel-collector-agent-tlsfd                           1/1     Running   0          172m
splunk-otel-collector-agent-tr5qg                           1/1     Running   0          172m
splunk-otel-collector-agent-vn2vr                           1/1     Running   0          172m
splunk-otel-collector-agent-xxxmr                           1/1     Running   0          173m
splunk-otel-collector-k8s-cluster-receiver-6b8f85b9-r5kft   1/1     Running   0          9h

I thought I needed to somehow send trace requests to one of these agents, but I don't see any Ingresses or Services deployed, so there is no DNS name my application could use to reach the collector:

$kubectl get services --namespace splunk
No resources found in splunk namespace.
$kubectl get ingresses --namespace splunk
No resources found in splunk namespace.
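
One thing I did notice is that the agents might not need a Service at all if they bind ports directly on each node. Something like this should show whether the agent container exposes a hostPort (I'm guessing the DaemonSet name from the pod names above, so it may differ in your setup):

$kubectl get daemonset splunk-otel-collector-agent --namespace splunk \
    -o jsonpath='{.spec.template.spec.containers[*].ports}'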
Does this mean I have to add Services/Ingresses myself, and that the Splunk otel-collector Helm chart doesn't include them?

Do you have any recommendations on how to configure this collector so that it can receive gRPC trace requests from applications in pods in other namespaces? Ideally there would be a single URL that automatically routes to the collector agents; the closest workaround I can think of is sketched below.
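
To make the question concrete, here is the kind of application-side wiring I have in mind. This is purely hypothetical: the Service DNS name is made up, and HOST_IP is just a variable name I chose for the downward-API workaround I'm considering.

# Hypothetical snippet from my application's pod spec
spec:
  containers:
    - name: my-app
      env:
        # What I'd like: one stable URL in front of all the agents, e.g.
        #   OTEL_EXPORTER_OTLP_ENDPOINT=http://splunk-otel-collector.splunk.svc.cluster.local:4317
        # Workaround I'm considering: send to the agent on the pod's own node
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://$(HOST_IP):4317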
