This blog post is part of an ongoing series on SOCK enablement.
In this blog post, I will explain how SOCK (the Splunk OpenTelemetry Collector for Kubernetes) behaves with the default configuration in the values.yaml file. This file contains the configuration values and variables that get passed to the chart templates and dictate how the collector works.
A basic way of working with SOCK is to create a new my_values.yaml file, override selected configuration values in it, and pass it to the chart installer. Today we will discuss the default behavior with minimal configuration.
The values.yaml file consists of nested variables that configure various parts of the system.
For example, this is a part of the default configuration for the Splunk platform - the core setting responsible for your connection to Splunk:
splunkPlatform:
  # Required for Splunk Enterprise/Cloud. URL to a Splunk instance to send data
  # to. e.g. "http://X.X.X.X:8088/services/collector/event". Setting this parameter
  # enables Splunk Platform as a destination. Use the /services/collector/event
  # endpoint for proper extraction of fields.
  endpoint: ""
  # Required for Splunk Enterprise/Cloud (if `endpoint` is specified). Splunk
  # HTTP Event Collector token.
  # Alternatively the token can be provided as a secret.
  # Refer to https://github.com/signalfx/splunk-otel-collector-chart/blob/main/docs/advanced-configuration.md#provide-tokens-as-a-secret
  token: ""
  # Name of the Splunk event type index targeted. Required when ingesting logs to Splunk Platform.
  index: "main"
  # Name of the Splunk metric type index targeted. Required when ingesting metrics to Splunk Platform.
  metricsIndex: ""
  # Name of the Splunk event type index targeted. Required when ingesting traces to Splunk Platform.
  tracesIndex: ""
  (...)
As you can see, various values are used to configure Splunk. The defaults set up the application without many extra features - just Kubernetes logs collection. Other features have to be turned on and configured manually. The comment above each entry describes what it does.
As an example, here is a piece of configuration responsible for request timeout:
# HTTP timeout when sending data. Defaults to 10s.
timeout: 10s
As we can see, the timeout for sending an event to Splunk is set to 10 seconds. You can customize this value for your own system if there is a need for a longer timeout with your configuration.
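If the default is too short for your environment, you can override this value in your my_values.yaml. Here is a minimal sketch, assuming the timeout setting sits under splunkPlatform as it does in the default values.yaml (30s is just an illustrative choice, not a recommendation):

```yaml
splunkPlatform:
  # Allow a slower HEC endpoint more time before a send attempt fails.
  # The chart default is 10s; 30s here is an arbitrary example value.
  timeout: 30s
```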
You can take a look at this documentation to get a better idea about the advanced configuration details, but in this post, we will explore a basic workable configuration and what it does.
First, you need to create a my_values.yaml file to overwrite some values in the default configuration. The official documentation explains how to do it: Splunk OpenTelemetry Collector docs. You don’t need much to run it, and a basic configuration file will look something like this:
clusterName: "test_cluster"
splunkPlatform:
  endpoint: "https://X.X.X.X:8088/services/collector/event"
  token: "00000000-0000-0000-0000-000000000000"
  index: "my_index"
  insecureSkipVerify: true
You have to overwrite the clusterName value, as it is required to run the application. It will act as an identifier of your k8s cluster and will be attached to every log, metric, and trace sent, as a k8s.cluster.name attribute.
We also have to set the Splunk platform endpoint that will ingest our data and a HEC token that will be used to access it. Refer to this doc on how to set up your HTTP event collector in Splunk.
You don’t have to overwrite the index value - by default logs will be sent to the main index - but it is considered a good practice to do so. In this example, I have changed this value to my_index, an index I created in my instance of Splunk. The logs gathered by SOCK will go to this index.
We are setting the insecureSkipVerify flag to true to skip verifying the certificate of our HEC endpoint when sending data over HTTPS in our test environment. If you have a valid certificate configured, leave this flag out - it defaults to false, so certificate verification stays enabled.
Great, now we have a working configuration file and can test our application! To install it, you first need to add the Splunk OpenTelemetry Collector chart repository with this command:
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
You should get the response “"splunk-otel-collector-chart" has been added to your repositories” if the repository was added correctly.
Now that our repo has been added we can install it:
helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
As we can see, we are installing the splunk-otel-collector chart from the repository we just added, under the release name my-splunk-otel-collector, with configuration --values taken from the my_values.yaml file that we’ve created.
After running this command we should see a response:
Splunk OpenTelemetry Collector is installed and configured to send data to the Splunk Platform endpoint (...)
That means our chart was installed correctly! Now that it is running it should be sending data to our Splunk instance using our custom configuration.
Another useful command that we can use is this:
helm upgrade --install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
It’s very similar to the command we used before, but it also works when the release is already installed - in that case it upgrades the existing release with any changes that we made in the my_values.yaml file.
By default, only logs will be sent to Splunk, as metrics and traces have to be turned on manually:
logsEnabled: true
metricsEnabled: false
tracesEnabled: false
So if you want to collect metrics or traces, change the metricsEnabled and tracesEnabled values accordingly. If you enable metrics, you will also have to specify metricsIndex, or the application won’t run.
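As an illustration, here is a hedged sketch of a my_values.yaml that turns on metrics collection; my_metrics_index is a hypothetical metrics-type index that you would need to create in your Splunk instance first:

```yaml
logsEnabled: true
metricsEnabled: true
tracesEnabled: false

splunkPlatform:
  # endpoint, token, and index configured as shown earlier
  # Required once metricsEnabled is set to true:
  metricsIndex: "my_metrics_index"
```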
If you configured your values correctly and installed your chart, you can now run the kubectl get pods command in a console to check whether it works. You should see one pod running, something like this:
splunker@test:~/splunk-otel-collector-chart$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
my-splunk-otel-collector-agent-lxns8   1/1     Running   0          63m
As we can see there is one agent pod running because there is only one node in our cluster. In a real-world scenario, you would probably see more agents as there would be more nodes - one agent running per node.
And if you enabled metrics, you should also be able to see a cluster receiver pod, like this one:
splunker@test:~/splunk-otel-collector-chart$ kubectl get pods
NAME                                                             READY   STATUS    RESTARTS   AGE
my-splunk-otel-collector-k8s-cluster-receiver-5d754c9fff-4wzrf   1/1     Running   0          104s
my-splunk-otel-collector-agent-fmc9p                             1/1     Running   0          104s
If everything is working fine the status field should state “Running”.
So by default our application will send logs (and metrics, traces if enabled) to Splunk. It will also try to resend dropped events in case of failure and use batches to optimize the process of sending data.
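To give a rough idea of what tuning this looks like, the chart lets you override the underlying collector configuration. The sketch below assumes the chart’s agent.config override mechanism and the splunk_hec exporter’s retry_on_failure settings; treat the exporter name and the values as illustrative assumptions, not authoritative defaults:

```yaml
agent:
  config:
    exporters:
      splunk_hec/platform_logs:
        retry_on_failure:
          enabled: true
          # Stop retrying a failed batch after 5 minutes (example value)
          max_elapsed_time: 300s
```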
Another commonly used setting is the target index. By default, logs go to the “main” index, but as mentioned before, you can change the index value to send them to a different index. In this example we changed it to my_index, so if you have that index configured in Splunk, that’s where the data will end up.
The simplest way to see if the data is correctly processed is to filter data by index inside Splunk:
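For example, assuming the my_index index from our configuration above, a simple SPL search like this will show the incoming logs:

```
index="my_index"
```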
There, you should see events being sent to Splunk in real-time.
If you want to check your metrics, you can use the mpreview search command like this:
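A sketch of such a search, assuming your metrics were sent to a hypothetical metrics index named my_metrics_index:

```
| mpreview index="my_metrics_index"
```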
We can observe that the system metrics are now being stored in Splunk.
Many other powerful features of SOCK won't be covered in this article, but there are resources that you can use to learn more about them.
You can browse the chart repository and the examples directory to look for ideas for what you can use it for. Reading through values.yaml will also give you a good idea of what can be done with it - all of the settings are described there.
Lastly, this series of blog posts is designed to help you learn about SOCK. I recommend taking a look at our other articles, covering the subjects of routing and multiline logs! 🙂