Splunk Cloud Platform

Service splunk-otel-collector does not start

studero
Engager

Hi,

For about a week now, the "splunk-otel-collector" service has not been starting.

Jul 21 14:00:22 svx-jsp-121i systemd[1]: Started Splunk OpenTelemetry Collector.
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:483: Set config to /etc/otel/collector/agent_config.yaml
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:539: Set memory limit to 460 MiB
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:524: Set soft memory limit set to 460 MiB
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:373: Set garbage collection target percentage (GOGC) to 400
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 settings.go:414: set "SPLUNK_LISTEN_INTERFACE" to "127.0.0.1"
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025-07-21T14:00:22.250+0200    warn    envprovider@v1.35.0/provider.go:61    Configuration references unset environment variable    {"name": "SPLUNK_GATEWAY_URL"}
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: Error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 'service.telemetry.metrics' decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: '' has invalid keys: address
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 2025/07/21 14:00:22 main.go:92: application run finished with error: failed to get config: cannot unmarshal the configuration: decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: 'service.telemetry.metrics' decoding failed due to the following error(s):
Jul 21 14:00:22 svx-jsp-121i otelcol[4083324]: '' has invalid keys: address
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Main process exited, code=exited, status=1/FAILURE
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Service RestartSec=100ms expired, scheduling restart.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Scheduled restart job, restart counter is at 5.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: Stopped Splunk OpenTelemetry Collector.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Start request repeated too quickly.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Jul 21 14:00:22 svx-jsp-121i systemd[1]: Failed to start Splunk OpenTelemetry Collector.


I need some help, please.

Regards,
Olivier

1 Solution

livehybrid
SplunkTrust

Hi @studero 

The error is being caused by a misconfiguration in your /etc/otel/collector/agent_config.yaml file. Could you share this file (redacted if required)?

Based on the logs, the service.telemetry.metrics section contains an "address" key, which is no longer valid. As of Collector v0.123.0, the service::telemetry::metrics::address setting is no longer accepted; the internal metrics endpoint should instead be configured as:

service:
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: '0.0.0.0'
                port: 8888
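
For reference, the part of your current agent_config.yaml that is being rejected most likely looks something like this (a guess based on the error message, so the exact interface and port on your host may differ):

service:
  telemetry:
    metrics:
      address: '0.0.0.0:8888'    # the "address" key is what the collector now rejects as invalid

Removing that address key, or replacing it with the readers block above, should let the service start.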

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

studero
Engager

Hi,

The Splunk OTel Collector package had been updated. During the update, the configuration file was renamed to *.newrpm and a new one was created in its place, like a default configuration file.

I renamed the saved *.newrpm file back to *.yaml and the service then restarted successfully (roughly the steps sketched below).
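
For anyone who hits the same issue, the steps were roughly as follows (paths taken from the logs above; the .bak copy is just an extra precaution, not something the package created):

# compare the active config with the *.newrpm file left by the update
diff /etc/otel/collector/agent_config.yaml /etc/otel/collector/agent_config.yaml.newrpm

# keep a copy of the current config before replacing it, then promote the saved file
cp /etc/otel/collector/agent_config.yaml /etc/otel/collector/agent_config.yaml.bak
mv /etc/otel/collector/agent_config.yaml.newrpm /etc/otel/collector/agent_config.yaml

# restart and check the service
systemctl restart splunk-otel-collector
systemctl status splunk-otel-collector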

Thanks for your help

Olivier
