Splunk Cloud Platform

Manage Splunk OTel Collectors via Deployment Server

asah
Engager

Hi Splunk Gurus,

We’re currently testing Splunk OpenTelemetry (Otel) Collectors on our Kubernetes clusters to collect logs and forward them to Splunk Cloud via HEC. We’re not using Splunk Observability at this time.

Is it possible to manage or configure these OTel Collectors through the traditional Splunk Deployment Server? If so, could you please share any relevant documentation or guidance?

I came across documentation related to the Splunk Add-on for the OpenTelemetry Collector, but it appears to be focused on Splunk Observability. Any clarification or direction would be greatly appreciated.

Thanks in advance for your support!

1 Solution

livehybrid
SplunkTrust

Hi @asah 

No, it isn't currently possible to use a Splunk Deployment Server (DS) to manage installations of native OTel Collectors; the DS can only push apps out to Splunk Enterprise instances and Universal Forwarders.

*HOWEVER*, the Splunk Add-on for the OpenTelemetry Collector can be deployed to a Splunk forwarder (UF/HF) via a Deployment Server, and this app is designed to solve exactly this problem by allowing management of OTel through the DS.

By deploying the Splunk Distribution of the OpenTelemetry Collector as an add-on, customers wishing to expand to Observability can do so more easily by taking advantage of existing tooling and know-how around using the Splunk Deployment Server (or other tools) to manage technical add-ons and .conf files. You can deploy, update, and configure OpenTelemetry Collector agents in the same manner as any technical add-on.

 Check out this blog post for more info: https://www.splunk.com/en_us/blog/devops/announcing-the-splunk-add-on-for-opentelemetry-collector.ht...

And also this page on how to configure it: https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-addon/collector-addon-configure...

So, in short: whilst you can't manage your existing K8s deployment of OTel via the DS, you could switch to using UFs which connect back to your DS and pull their config from there, if you are willing to swap over to a UF... but then, if you're going to install a UF just to manage OTel, you might as well send the logs via the UF to Splunk Cloud?! (Unless there is another reason you need/want OTel, such as instrumentation.)

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

Since you're already testing the OTel Collector on your K8s cluster, I assume you've already sorted out that side of the deployment process, but in case it's of any help there are some docs at https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-linux/collector-linux-intro.htm... and https://docs.splunk.com/observability/en/gdi/opentelemetry/deployment-modes.html which may be useful.

 Regarding Splunk Add-on for the OpenTelemetry Collector, this 

isoutamo
SplunkTrust
Here is last year's .conf presentation about using the DS with a UF and the OTel Collector add-on: https://conf.splunk.com/files/2024/slides/PLA1117B.pdf

asimit
Path Finder

Hi @asah,

No, the traditional Splunk Deployment Server cannot be used to manage Splunk OpenTelemetry (OTel) Collectors running in Kubernetes clusters. Here's why and what alternatives you should consider:

## Why Deployment Server Won't Work

1. **Different Architecture**: Splunk Deployment Server is designed to manage Splunk-specific components like Universal Forwarders and Heavy Forwarders, which use Splunk's proprietary configuration system. The OpenTelemetry Collector uses a completely different configuration approach.

2. **Kubernetes-Native Components**: OTel Collectors running in Kubernetes are typically deployed as Kubernetes resources (Deployments, DaemonSets, etc.) and follow Kubernetes configuration patterns using ConfigMaps or Secrets.

3. **Configuration Format**: OTel Collectors use YAML configurations with a specific schema that's different from Splunk's .conf files.

## Recommended Approaches for Managing OTel Collectors in Kubernetes

### 1. GitOps Workflow (Recommended)

Use a GitOps approach with tools like:
- Flux or ArgoCD for configuration management
- Store your OTel configurations in a Git repository
- Use Kubernetes ConfigMaps to mount configurations into your collectors
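
For illustration, here is a minimal Argo CD `Application` sketch (the repository URL, path, and namespaces are placeholders for your own setup) that keeps the collector manifests in sync with Git:

```yaml
# Hypothetical Argo CD Application; repoURL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: splunk-otel-collector
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/otel-config.git   # placeholder repo holding your manifests
    targetRevision: main
    path: collectors/production
  destination:
    server: https://kubernetes.default.svc
    namespace: splunk
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With this in place, a change merged to the Git repository is rolled out to the cluster automatically, which gives you roughly the same "central config push" workflow that a Deployment Server provides for Splunk forwarders.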

### 2. Helm Charts

The Splunk OpenTelemetry Collector Helm chart provides a manageable way to deploy and configure collectors:

```bash
# Add the Splunk OpenTelemetry Collector chart repository and install the chart.
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
# Note: you also need to supply Splunk Cloud HEC details (endpoint, token, index),
# either with additional --set flags or a values file (see the sketch below).
helm install my-splunk-otel splunk-otel-collector-chart/splunk-otel-collector \
  --set gateway.enabled=true \
  --set clusterName=my-cluster
```

You can create custom values.yaml files for different environments and manage them in your version control system.
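
As a rough sketch of such a values file for sending logs to Splunk Cloud over HEC (the endpoint, token, and index below are placeholders; in practice the token should come from a secret rather than plain text):

```yaml
# Hypothetical values.yaml for the splunk-otel-collector chart; adapt names to your environment.
clusterName: my-cluster
gateway:
  enabled: true
splunkPlatform:
  endpoint: "https://your-splunk-cloud-instance.splunkcloud.com:8088/services/collector"  # placeholder HEC URL
  token: "00000000-0000-0000-0000-000000000000"   # placeholder HEC token
  index: "k8s_logs"                               # placeholder index
```

You would then install or upgrade with `helm upgrade --install my-splunk-otel splunk-otel-collector-chart/splunk-otel-collector -f values.yaml`, keeping one values file per environment in version control.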

### 3. Kubernetes Operator

For more sophisticated management, consider the operator pattern. Splunk doesn't ship its own collector operator, but the community-maintained OpenTelemetry Operator can deploy and manage collector instances declaratively through an `OpenTelemetryCollector` custom resource.
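
As a hedged sketch (it assumes the community OpenTelemetry Operator is already installed in the cluster; the name, namespace, and HEC endpoint are placeholders, and host log-path mounts are omitted for brevity), a collector instance can be declared like this:

```yaml
# Hypothetical OpenTelemetryCollector resource; requires the community OpenTelemetry Operator.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: splunk-logs
  namespace: splunk
spec:
  mode: daemonset            # one collector per node for log collection
  config: |                  # same collector YAML style as elsewhere in this thread
    receivers:
      filelog:
        include: [/var/log/containers/*.log]
    processors:
      batch: {}
    exporters:
      splunk_hec:
        token: "${SPLUNK_HEC_TOKEN}"
        endpoint: "https://your-splunk-cloud-instance.splunkcloud.com:8088/services/collector"  # placeholder
    service:
      pipelines:
        logs:
          receivers: [filelog]
          processors: [batch]
          exporters: [splunk_hec]
```

The operator then creates and updates the underlying DaemonSet whenever the resource changes.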

### 4. Configuration Management Tools

Use standard configuration management tools like:
- Ansible
- Terraform
- Puppet/Chef

These can apply configuration changes across your Kubernetes clusters in a controlled manner.
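
For example, a small Ansible playbook sketch (assuming the `kubernetes.core` collection and working cluster credentials; the manifest paths and namespace are placeholders) could apply the collector manifests shown further below:

```yaml
# Hypothetical Ansible playbook; assumes the kubernetes.core collection and kubeconfig access.
- name: Manage Splunk OTel Collector configuration
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Apply the collector ConfigMap
      kubernetes.core.k8s:
        state: present
        namespace: splunk
        src: files/otel-collector-configmap.yaml    # placeholder path to your manifest
    - name: Apply the collector Deployment
      kubernetes.core.k8s:
        state: present
        namespace: splunk
        src: files/otel-collector-deployment.yaml   # placeholder path to your manifest
```

Note that changing only a ConfigMap does not restart the pods that mount it; pair this with a rollout step or the kustomize approach shown at the end of this post.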

## Practical Example

Here's a simplified workflow for managing OTel configurations in Kubernetes:

1. Store your base collector config in a Git repo:

```yaml
# otel-collector-config.yaml
receivers:
  filelog:
    include: [/var/log/containers/*.log]

processors:
  batch:
    timeout: 1s

exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    # Use your Splunk Cloud HEC URL; the splunk_hec exporter expects the full
    # /services/collector endpoint (host and port depend on your stack).
    endpoint: "https://your-splunk-cloud-instance.splunkcloud.com:8088/services/collector"

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [splunk_hec]
```

2. Create a ConfigMap in Kubernetes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: splunk
data:
  collector.yaml: |
    receivers:
      filelog:
        include: [/var/log/containers/*.log]
    # Rest of config...
```

3. Mount the ConfigMap in your OTel Collector deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: splunk            # same namespace as the ConfigMap above
spec:
  selector:                    # required for apps/v1 Deployments
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector-contrib:latest
        args: ["--config=/etc/otel/config.yaml"]   # point the collector at the mounted file
        volumeMounts:
        - name: config
          mountPath: /etc/otel/config.yaml
          subPath: collector.yaml
      volumes:
      - name: config
        configMap:
          name: otel-collector-config
```

This approach lets you manage configurations in a Kubernetes-native way, with proper version control and rollout strategies.
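
As one hedged example of the rollout piece (assuming the config and Deployment manifests from the steps above sit next to this file; the file names are placeholders), a `kustomization.yaml` with a ConfigMap generator gives the ConfigMap a content-hash suffix, so the Deployment rolls automatically whenever the config file changes:

```yaml
# Hypothetical kustomization.yaml; replaces the hand-written ConfigMap from step 2.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: splunk
resources:
  - deployment.yaml                                 # the collector Deployment from step 3
configMapGenerator:
  - name: otel-collector-config
    files:
      - collector.yaml=otel-collector-config.yaml   # key in the ConfigMap = source file from step 1
```

Apply it with `kubectl apply -k .`; kustomize rewrites the ConfigMap reference in the Deployment to the hashed name, which triggers a fresh rollout on every config change.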

For more information, I recommend checking the official documentation:
- [Splunk OpenTelemetry Collector for Kubernetes](https://github.com/signalfx/splunk-otel-collector-chart)
- [OpenTelemetry Collector Configuration](https://opentelemetry.io/docs/collector/configuration/)

Please give 👍 for support 😁 Happy Splunking .... 😎