
Troubleshooting Your OpenTelemetry Collector Deployment

atoulme
Splunk Employee

This blog post is part of an ongoing series on OpenTelemetry.

In this blog post, we customize a Splunk OTel Collector configuration file to add a logging exporter.

We send metrics to the signalfx-forwarder endpoint, but instead of forwarding them to the backend, we simply log them as debug output.

 

exporters:
  # Traces
  sapm:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_TRACE_URL}"
  # Metrics + Events
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    # Use instead when sending to gateway
    #api_url: http://${SPLUNK_GATEWAY_URL}:6060
    #ingest_url: http://${SPLUNK_GATEWAY_URL}:9943
    sync_host_metadata: true
    correlation:
  # Logs
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "${SPLUNK_HEC_URL}"
    source: "otel"
    sourcetype: "otel"
  # Send to gateway
  otlp:
    endpoint: "${SPLUNK_GATEWAY_URL}:4317"
    tls:
      insecure: true
  # Debug
  logging:
    loglevel: debug

service:
  extensions: [health_check, http_forwarder, zpages, memory_ballast]
  pipelines:
    metrics:
      receivers: [smartagent/signalfx-forwarder, signalfx]
      processors: [memory_limiter, batch, resourcedetection]
      # exporters: [signalfx]
      exporters: [logging]
      # Use instead when sending to gateway
      #exporters: [otlp]

 

Note in particular how we modified the metrics pipeline: the signalfx exporter is commented out and the logging exporter takes its place.
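
If you want to keep sending data to the backend while also inspecting it locally, a pipeline can list more than one exporter. A minimal sketch of that variant, reusing the exporter names defined above:

    metrics:
      receivers: [smartagent/signalfx-forwarder, signalfx]
      processors: [memory_limiter, batch, resourcedetection]
      # Keep sending to the backend and also log a debug copy of every datapoint
      exporters: [signalfx, logging]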

Start the Splunk OTel Collector with the following docker command, redirecting its output to an output.log file so that you are not overwhelmed by the amount of data on your screen:

 

docker run --rm -e SPLUNK_ACCESS_TOKEN="INGEST_TOKEN_VALUE" \
    -e SPLUNK_HEC_TOKEN="INGEST_TOKEN_VALUE" \
    -e SPLUNK_CONFIG=/etc/collector.yaml -p 13133:13133 -p 14250:14250 \
    -e SPLUNK_BALLAST_SIZE_MIB=2666 -e SPLUNK_MEMORY_LIMIT_MIB=8000 \
    -e SPLUNK_API_URL="https://api.us1.signalfx.com" \
    -e SPLUNK_TRACE_URL="https://ingest.us1.signalfx.com/v2/trace" \
    -e SPLUNK_INGEST_URL="https://ingest.us1.signalfx.com" \
    -e SPLUNK_HEC_URL="https://ingest.us1.signalfx.com/v1/log" \
    -p 14268:14268 -p 4317:4317 -p 4318:4318 -p 6060:6060 -p 8888:8888 \
    -p 9080:9080 -p 9411:9411 -p 9943:9943 \
    -v "${PWD}/agent_config.yaml":/etc/collector.yaml:ro \
    --name otelcol quay.io/signalfx/splunk-otel-collector:latest \
    &> output.log

 

Depending on your use case, you can omit some of the exposed ports. If you are sending data to the backend, update the URLs to match your realm.
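
One port worth keeping is 13133: before sending test data, you can confirm the collector came up cleanly by querying the health_check extension listed in the configuration (a quick sanity check, assuming the extension's default endpoint):

curl http://localhost:13133

A healthy collector responds with HTTP 200 and a short JSON status.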

Open a new command line tab and run the following curl command to send a test metric datapoint to the Splunk OTel Collector:

 

curl -X POST "http://localhost:9080/v2/datapoint" \
      -H "Content-Type: application/json" \
      -H "X-SF-Token: ENTER_TOKEN" \
      -H "Connection: Keep-Alive" \
      -H "Keep-Alive: timeout=5, max=100" \
      -d '{
          "gauge": [
            {
              "metric": "test",
              "value": 1
            }
          ]
        }'

 

You should see the datapoint in the output.log file, similar to the following:

 

Resource SchemaURL: https://opentelemetry.io/schemas/1.6.1
Resource attributes:
     -> host.name: STRING(1099f15c9e2b)
     -> os.type: STRING(linux)
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope  
Metric #0
Descriptor:
     -> Name: test
     -> Description: 
     -> Unit: 
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> system.type: STRING(signalfx-forwarder)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2022-11-03 20:14:24.241657411 +0000 UTC
Value: 1
{"kind": "exporter", "data_type": "metrics", "name": "logging"}

 

Now you can see exactly what would be sent to the backend! You can use the same method to send custom metrics to your backend as well.
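
The /v2/datapoint endpoint also accepts dimensions on each datapoint, which is handy when you want custom metrics tagged by host or service. A hedged sketch following the same SignalFx datapoint format (the metric name and dimension values here are just examples):

curl -X POST "http://localhost:9080/v2/datapoint" \
      -H "Content-Type: application/json" \
      -H "X-SF-Token: ENTER_TOKEN" \
      -d '{
          "gauge": [
            {
              "metric": "queue.depth",
              "value": 42,
              "dimensions": {
                "host": "app-01",
                "environment": "lab"
              }
            }
          ]
        }'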

— Maulik Patel, Professional Services Senior Consultant at Splunk
