
How to configure the Splunk OpenTelemetry Collector in Kubernetes with an OTLP receiver

Manior
New Member

Hi, I'm new to Splunk and relatively inexperienced with DevOps topics. I have the Splunk OpenTelemetry Collector deployed in a new namespace in my Kubernetes cluster, and I want to configure an OTLP receiver to collect application traces via gRPC. I used https://github.com/signalfx/splunk-otel-collector-chart to deploy the collector, and I also enabled the OTLP receiver and added a new pipeline to the agent config.
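
Roughly, this is what I added to my values.yaml (a simplified sketch; the exporter name below is just a placeholder for whatever the chart already configures):

agent:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [sapm]  # placeholder; the real exporter comes from the chart defaults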

However, I'm struggling to understand how to send traces to the collector. As far as I can see in Kubernetes, there are many agents deployed, one per node:

$ kubectl get pods --namespace splunk
NAME                                                        READY   STATUS    RESTARTS   AGE
splunk-otel-collector-agent-286bf                           1/1     Running   0          172m
splunk-otel-collector-agent-2cp2k                           1/1     Running   0          172m
splunk-otel-collector-agent-2gbhh                           1/1     Running   0          172m
splunk-otel-collector-agent-44ts5                           1/1     Running   0          172m
splunk-otel-collector-agent-6ngvz                           1/1     Running   0          173m
splunk-otel-collector-agent-cpmtg                           1/1     Running   0          172m
splunk-otel-collector-agent-dfx8v                           1/1     Running   0          171m
splunk-otel-collector-agent-f4trw                           1/1     Running   0          172m
splunk-otel-collector-agent-g85cw                           1/1     Running   0          172m
splunk-otel-collector-agent-gz9ch                           1/1     Running   0          172m
splunk-otel-collector-agent-hjbmt                           1/1     Running   0          172m
splunk-otel-collector-agent-lttst                           1/1     Running   0          172m
splunk-otel-collector-agent-lzz4f                           1/1     Running   0          172m
splunk-otel-collector-agent-mcgc8                           1/1     Running   0          173m
splunk-otel-collector-agent-snqg8                           1/1     Running   0          173m
splunk-otel-collector-agent-t2gg8                           1/1     Running   0          171m
splunk-otel-collector-agent-tlsfd                           1/1     Running   0          172m
splunk-otel-collector-agent-tr5qg                           1/1     Running   0          172m
splunk-otel-collector-agent-vn2vr                           1/1     Running   0          172m
splunk-otel-collector-agent-xxxmr                           1/1     Running   0          173m
splunk-otel-collector-k8s-cluster-receiver-6b8f85b9-r5kft   1/1     Running   0          9h

I thought I needed to somehow send trace requests to one of these agents, but I don't see any Ingress or Service resources deployed that would give my application a DNS name for the collector:

$ kubectl get services --namespace splunk
No resources found in splunk namespace.
$ kubectl get ingresses --namespace splunk
No resources found in splunk namespace.
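
The only workaround I can think of so far is pointing each application pod at the agent on its own node via the Downward API, assuming the agent exposes OTLP gRPC on a host port (which I haven't verified). Something like this in the application's pod spec:

env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP  # the IP of the node the pod runs on
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(HOST_IP):4317"  # assumes the agent listens on host port 4317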

Does this mean I have to add some Ingresses/Services myself, and that the Splunk otel-collector Helm chart doesn't include them?

Do you have any recommendations on how to configure this collector so that it can receive gRPC trace requests from applications running in pods in other namespaces? It would be nice to have a single URL that automatically routes to the collector agents.
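
For example, is something like the following Service what I would need to create myself? (The selector label is a guess based on the pod names; I would have to check it against the labels the chart actually sets.)

apiVersion: v1
kind: Service
metadata:
  name: splunk-otel-collector
  namespace: splunk
spec:
  selector:
    app: splunk-otel-collector  # guessed label; verify with: kubectl get pods --show-labels
  ports:
    - name: otlp-grpc
      protocol: TCP
      port: 4317
      targetPort: 4317

Then my applications could send to splunk-otel-collector.splunk.svc.cluster.local:4317, although I'm not sure whether load-balancing across the node agents like this is the intended pattern for a DaemonSet.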
