Hello Everyone,
I'm currently exploring Splunk Observability Cloud for sending log data. From the portal, it appears there are only two ways to send logs: via Splunk Enterprise or Splunk Cloud.
I'm curious whether there's an alternative method to send logs using the Splunk HTTP Event Collector (HEC) exporter. According to the documentation here, the Splunk HEC exporter allows the OpenTelemetry Collector to send traces, metrics, and logs to Splunk HEC endpoints. Is it also possible to use fluentforward, otlphttp, signalfx, or anything else for this purpose?
Additionally, I have an EC2 instance running the splunk-otel-collector service, which successfully sends infrastructure metrics to Splunk Observability Cloud. Can this service also be used to send logs to Splunk Observability Cloud?
According to the agent_config.yaml file provided by the splunk-otel-collector service, there are several pre-configured service settings related to logs, including logs/signalfx, logs/entities, and logs. These configurations use different exporters such as splunk_hec, splunk_hec/profiling, otlphttp/entities, and signalfx.
Could you explain what each of these configurations is intended to do?
service:
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [otlphttp, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway, signalfx]
    metrics:
      receivers: [hostmetrics, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx, statsd]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection, resource/add_mode]
      # When sending to gateway, at least one metrics pipeline needs
      # to use signalfx exporter so host metadata gets emitted
      exporters: [signalfx]
    logs/signalfx:
      receivers: [signalfx, smartagent/processlist]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    logs/entities:
      # Receivers are dynamically added if discovery mode is enabled
      receivers: [nop]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [otlphttp/entities]
    logs:
      receivers: [fluentforward, otlp]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [splunk_hec, splunk_hec/profiling]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]
Thanks!
Let’s think about this from 2 perspectives: sending logs and ingesting logs.
Splunk Enterprise and Splunk Cloud are where logs are ingested, so you can send logs there using any method you prefer. There are countless ways to send logs; some examples include the Splunk Universal Forwarder, the OpenTelemetry Collector, and fluentd. With the OTel collector, you choose which receiver to use to collect logs, such as the filelog or otlp receivers, and the collector uses exporters to send those logs to a logging backend like Splunk Enterprise/Cloud.
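As a minimal sketch of that path (the log path, endpoint, and token below are hypothetical placeholders, not values from your setup), a collector config that tails a file and ships it to Splunk Enterprise/Cloud over HEC could look like:

receivers:
  filelog:
    include: [/var/log/myapp/*.log]   # hypothetical application log path

processors:
  batch:                              # batch log records before export

exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"                   # placeholder HEC token
    endpoint: "https://splunk.example.com:8088/services/collector"  # placeholder HEC URL
    source: otel
    sourcetype: otel

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [splunk_hec]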
Splunk Observability Cloud ingests metrics and traces, and it uses an integration called Log Observer Connect to read logs from Splunk Cloud/Enterprise, displaying and correlating them with metrics and traces so you can see all three signals in one place.
The OTel YAML you shared is your pipeline configuration, where you tell the OTel collector how to receive, process, and export your telemetry. For example, in your “logs” pipeline, you’re receiving logs from the fluentforward and otlp receivers, processing those logs with the memory_limiter, batch, and resourcedetection processors, and then exporting the log data with the splunk_hec and splunk_hec/profiling exporters.
The splunk_hec exporter represents an HTTP Event Collector (HEC) endpoint on Splunk Cloud/Enterprise, and the splunk_hec/profiling exporter represents a special Observability Cloud endpoint dedicated to code profiling data (not typical logs, but still technically logs).
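For reference, the exporter definitions behind that pipeline typically look roughly like the sketch below. The environment-variable placeholders are an assumption based on the defaults shipped with the splunk-otel-collector agent_config.yaml, so verify them against your own file:

exporters:
  # Ordinary log events to a Splunk Enterprise/Cloud HEC endpoint
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "${SPLUNK_HEC_URL}"
    source: otel
    sourcetype: otel
  # AlwaysOn Profiling data to the Observability Cloud ingest endpoint
  splunk_hec/profiling:
    token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_INGEST_URL}/v1/log"
    log_data_enabled: false   # this exporter carries profiling data only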
Thank you for the detailed explanation; I truly appreciate it.