Deployment Architecture

How to configure the Splunk OpenTelemetry Collector in Kubernetes with an OTLP receiver

Manior
New Member

Hi, I'm new to Splunk and relatively inexperienced with DevOps topics. I have a Splunk OpenTelemetry Collector deployed in a new namespace in my Kubernetes cluster, and I want to configure an OTLP receiver to collect application traces via gRPC. I used https://github.com/signalfx/splunk-otel-collector-chart to deploy the collector, enabled the OTLP receiver, and added a new pipeline to the agent config.
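
For context, this is roughly the values.yaml override I applied when installing the chart (paraphrased from memory, so the exact keys and exporter name may be off; as I understand it, agent.config gets merged into the chart's default agent configuration):

agent:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317              # OTLP over gRPC
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]   # reusing the chart's default processors
          exporters: [sapm]                     # whatever trace exporter the chart set up by default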

However, I'm struggling to understand how to send traces to the collector.
As I can see in Kubernetes, there are many agent pods deployed, one per node:

 

$ kubectl get pods --namespace splunk
NAME                                                        READY   STATUS    RESTARTS   AGE
splunk-otel-collector-agent-286bf                           1/1     Running   0          172m
splunk-otel-collector-agent-2cp2k                           1/1     Running   0          172m
splunk-otel-collector-agent-2gbhh                           1/1     Running   0          172m
splunk-otel-collector-agent-44ts5                           1/1     Running   0          172m
splunk-otel-collector-agent-6ngvz                           1/1     Running   0          173m
splunk-otel-collector-agent-cpmtg                           1/1     Running   0          172m
splunk-otel-collector-agent-dfx8v                           1/1     Running   0          171m
splunk-otel-collector-agent-f4trw                           1/1     Running   0          172m
splunk-otel-collector-agent-g85cw                           1/1     Running   0          172m
splunk-otel-collector-agent-gz9ch                           1/1     Running   0          172m
splunk-otel-collector-agent-hjbmt                           1/1     Running   0          172m
splunk-otel-collector-agent-lttst                           1/1     Running   0          172m
splunk-otel-collector-agent-lzz4f                           1/1     Running   0          172m
splunk-otel-collector-agent-mcgc8                           1/1     Running   0          173m
splunk-otel-collector-agent-snqg8                           1/1     Running   0          173m
splunk-otel-collector-agent-t2gg8                           1/1     Running   0          171m
splunk-otel-collector-agent-tlsfd                           1/1     Running   0          172m
splunk-otel-collector-agent-tr5qg                           1/1     Running   0          172m
splunk-otel-collector-agent-vn2vr                           1/1     Running   0          172m
splunk-otel-collector-agent-xxxmr                           1/1     Running   0          173m
splunk-otel-collector-k8s-cluster-receiver-6b8f85b9-r5kft   1/1     Running   0          9h

 

I thought I needed to somehow send trace requests to one of these agents, but I don't see any Ingresses or Services deployed, so there is no DNS name my application could use to reach the collector.

 

$ kubectl get services --namespace splunk
No resources found in splunk namespace.
$ kubectl get ingresses --namespace splunk
No resources found in splunk namespace.
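
The only workaround I can think of is pointing each application at the agent running on its own node, assuming the agents expose the OTLP gRPC port (4317?) as a hostPort, by injecting the node IP with the downward API. Something like this sketch, though I'm not sure this is the intended setup:

apiVersion: v1
kind: Pod
metadata:
  name: my-app                        # hypothetical application pod
spec:
  containers:
    - name: my-app
      image: my-app:latest            # placeholder image
      env:
        # Downward API: inject the IP of the node this pod runs on
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # Standard OpenTelemetry SDK variable pointing at the node-local agent
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://$(NODE_IP):4317"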

 

Does this mean I have to add some Ingresses/Services myself, and that the Splunk otel-collector Helm chart doesn't include them?

Do you have any recommendations on how I can configure this collector so it can receive traces over gRPC from applications running in pods in other namespaces? It would be nice if I could have one URL that automatically routes to the collector agents; a sketch of what I picture is below.
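
For reference, this is roughly what I imagine having to create myself if the chart doesn't ship one: a ClusterIP Service in front of the agent pods so other namespaces can use a single DNS name such as splunk-otel-collector.splunk.svc.cluster.local:4317. The label selector below is a guess based on the pod names; I haven't checked the chart's actual labels:

apiVersion: v1
kind: Service
metadata:
  name: splunk-otel-collector
  namespace: splunk
spec:
  selector:
    app: splunk-otel-collector              # assumed labels on the agent DaemonSet pods
    component: otel-collector-agent
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP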
