In our previous post, we walked through integrating our Kubernetes environment with Splunk Observability Cloud using the Splunk Distribution of the OpenTelemetry Collector for Kubernetes. In this post, we’ll look at the general Splunk Distribution of the OpenTelemetry Collector and dive into the configuration for a Collector deployed in host (agent) monitoring mode. We’ll walk through the different pieces of the config so you can easily customize and extend your own configuration. We’ll also talk about common configuration problems and how you can avoid them so that you can seamlessly get up and running with your own OpenTelemetry Collector.
After you’ve installed the OpenTelemetry Collector for Linux or Windows, you can locate configuration files under the /etc/otel/collector directory on Linux or the \ProgramData\Splunk\OpenTelemetry Collector\ directory on Windows. You’ll notice several Collector configuration files live under this directory – a gateway_config used to configure Collectors deployed in data forwarding (gateway) mode, an otlp_config_linux configuration file for exporting OpenTelemetry traces to Splunk, configuration files designed for use with AWS ECS tasks, and so on. Because we’re looking at configuring our application’s instrumentation and collecting host and application metrics, we will focus on the agent_config.yaml Collector configuration file. When you open up this config, you’ll notice it’s composed of the following blocks: extensions, receivers, processors, exporters, and service.
In the extensions block of the Collector config, you’ll find components that extend Collector capabilities. This section defines things like health monitoring, service discovery, data forwarding – anything not directly involved with processing telemetry data.
The Splunk Distribution of the OpenTelemetry Collector defines a few default extensions:
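For reference, here’s a trimmed-down sketch of what the extensions block typically looks like in agent_config.yaml. Exact defaults vary by Collector version, and the ${SPLUNK_API_URL} variable is populated by the installer:

```yaml
extensions:
  # Exposes an HTTP endpoint for liveness checks (more on this in the
  # troubleshooting section below).
  health_check:
    endpoint: 0.0.0.0:13133
  # Forwards incoming HTTP requests on to the Splunk API endpoint.
  http_forwarder:
    ingress:
      endpoint: 0.0.0.0:6060
    egress:
      endpoint: "${SPLUNK_API_URL}"
  # Serves live debugging pages about the Collector's internal state.
  zpages:
```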
Receivers are responsible for getting telemetry data into the Collector. This section of the configuration file is where data sources are configured.
In this example config file, we have several default receivers configured:
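A condensed sketch of what these defaults look like is below; ports, protocols, and scrapers may differ across Collector versions, so treat this as illustrative rather than exact:

```yaml
receivers:
  # Scrapes system-level metrics from the host.
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      memory:
      network:
  # Accepts traces, metrics, and logs over the OpenTelemetry protocol.
  otlp:
    protocols:
      grpc:
      http:
  # Accepts traces from Jaeger clients.
  jaeger:
    protocols:
      grpc:
      thrift_http:
  # Accepts traces from Zipkin clients.
  zipkin:
    endpoint: 0.0.0.0:9411
```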
Processors receive telemetry data from the receivers and transform the data based on rules or settings. For example, a processor might filter, drop, rename, or recalculate telemetry data.
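For example, the default config enables a handful of processors along these lines (a sketch – check your installed agent_config.yaml for the exact set; ${SPLUNK_MEMORY_LIMIT_MIB} is set by the installer):

```yaml
processors:
  # Refuses new data once a soft memory limit is reached, preventing
  # out-of-memory crashes.
  memory_limiter:
    check_interval: 2s
    limit_mib: ${SPLUNK_MEMORY_LIMIT_MIB}
  # Groups telemetry into batches to reduce outgoing requests.
  batch:
  # Detects host and cloud metadata and attaches it to telemetry as
  # resource attributes.
  resourcedetection:
    detectors: [system, env, gcp, ec2, azure]
    override: true
```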
The exporters block is the configuration section that defines which backends or destinations telemetry data will be sent to.
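In the Splunk distribution, traces typically go out through the sapm exporter and metrics through the signalfx exporter. A rough sketch, where the ${SPLUNK_*} variables are populated by the installer:

```yaml
exporters:
  # Sends traces to Splunk Observability Cloud (APM).
  sapm:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_TRACE_URL}"
  # Sends metrics and events to Splunk Observability Cloud.
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
```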
The service block is where the previously configured components (extensions, receivers, processors, exporters) are enabled – extensions at the service level, and the rest within the pipelines.
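Putting it together, a simplified service block wiring up the components from the earlier sketches might look like this:

```yaml
service:
  extensions: [health_check, http_forwarder, zpages]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [sapm]
    metrics:
      receivers: [hostmetrics, otlp]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
```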
There are a few problems you might run into when configuring your OTel Collector. Common issues are caused by:
Indentation is a very common problem. Collector configs are written in YAML, which is indentation-sensitive, and a YAML linter can help you verify that your indentation is correct. The good news is that the Collector fails fast – if the indentation is wrong, the Collector won’t start, so you can identify and fix the problem right away.
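Here’s a contrived illustration of the kind of indentation slip that prevents startup, shown as two YAML documents for contrast:

```yaml
# Broken: "protocols" sits at the same level as "otlp", so the
# Collector tries to interpret it as a receiver of its own and
# fails to start.
receivers:
  otlp:
  protocols:
    grpc:
---
# Fixed: "protocols" is nested one level under "otlp".
receivers:
  otlp:
    protocols:
      grpc:
```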
If you’ve set up your Collector but data isn’t appearing in the backend, there’s a good chance a component was configured but never enabled in a pipeline. After each pipeline component is configured, it must also be enabled in a pipeline under the service block of the config.
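In other words, defining a component is only half the job. A hedged sketch, using a zipkin receiver as the example:

```yaml
receivers:
  # Step 1: configure the component...
  zipkin:
    endpoint: 0.0.0.0:9411

service:
  pipelines:
    traces:
      # Step 2: ...and enable it in a pipeline. Without "zipkin" in
      # this list, the receiver never starts and no data flows.
      receivers: [otlp, zipkin]
      processors: [memory_limiter, batch]
      exporters: [sapm]
```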
If a receiver, processor, or exporter doesn’t support the data type of the pipeline it’s placed in, you’ll encounter an ErrDataTypeIsNotSupported error. Confirm the pipeline types supported by each Collector component and make sure they match the pipelines they’re enabled in.
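As an illustration (not part of the default config), the fluentforward receiver only produces logs, so placing it in a metrics pipeline would trigger this error:

```yaml
service:
  pipelines:
    metrics:
      # fluentforward is a logs-only receiver; listing it in a metrics
      # pipeline fails at startup with ErrDataTypeIsNotSupported.
      receivers: [hostmetrics, fluentforward]
      processors: [memory_limiter, batch]
      exporters: [signalfx]
```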
You can always ensure your Collector is up and running with the health check extension, which is on by default with the Splunk Distribution of the OpenTelemetry Collector. From your Linux host, open http://localhost:13133. If your Collector service is up and running, you’ll see a status of “Server available”.
You can also monitor all of your Collectors with Splunk Observability Cloud’s built-in dashboard. Data for this dashboard comes from the Collector’s own internal metrics, which the prometheus/internal receiver scrapes and the metrics/internal pipeline ships to Splunk Observability Cloud.
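In the default agent config, that wiring looks roughly like this (a condensed sketch; port 8888 is the Collector’s own metrics endpoint):

```yaml
receivers:
  # Scrapes the Collector's own internal metrics.
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: otel-collector
          scrape_interval: 10s
          static_configs:
            - targets: ["0.0.0.0:8888"]

service:
  pipelines:
    # Feeds those internal metrics to Splunk Observability Cloud,
    # powering the built-in Collector dashboard.
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
```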
To help you through the configuration of your own OTel Collector, we walked through the config file for the Splunk Distribution of the OpenTelemetry Collector and called out potential problems you might run into with the config. If you don’t already have an OpenTelemetry Collector installed and configured, start your Splunk Observability Cloud 14-day free trial and get started with the Splunk Distribution of the OpenTelemetry Collector.