
Integrating Kubernetes and Splunk Observability Cloud

CaitlinHalla
Splunk Employee

We need end-to-end insight into our application environments to confidently ensure everything is up and running so that our customers stay ridiculously happy. We need to be able to monitor, troubleshoot, anticipate, and detect issues. We need to do it intuitively, quickly, and efficiently so that when problems arise, time to resolution is as short as possible. There are plenty of solutions out there, but in this post, we’re going to learn how to quickly and easily integrate and configure our Kubernetes application with Splunk Observability Cloud using the Splunk Distribution of the OpenTelemetry Collector for Kubernetes.   

Why Splunk Observability Cloud?

Splunk Observability Cloud is Splunk’s full-stack, OpenTelemetry-native observability offering. Complete with Application Performance Monitoring, Infrastructure Monitoring, Real User Monitoring, Synthetic Monitoring, and Log Observer Connect, Splunk Observability Cloud provides a centralized, detailed view of the systems running in your environment, no matter where they’re hosted – on-premises, in private or public clouds, or even in serverless environments. This visibility not only helps you pinpoint the source of problems faster, but because your telemetry is collected with OpenTelemetry, you can also send your standardized data where you want, when you want – no vendor lock-in required.

Ok. This sounds complicated. Observability across digital architecture and multiple Splunk products? Plus Kubernetes? We promise it’s easy. Let’s walk through it together. 

Step 1: Sign up for Splunk Observability Cloud Free Trial

If you don’t already have a Splunk account, try Splunk Observability Cloud free for 14 days! Just fill in a few details, verify your email address, and voila! Upon verification, you’ll be taken directly to Splunk Observability Cloud’s home page. 

Step 2: Integrate your data

From the home page, open up the Data Management section on the left toolbar.

(Screenshot: the Splunk Observability Cloud home page with the Data Management section)

Select Add Integration located at the top right of the screen: 

(Screenshot: the Add Integration button)

Once on the Available integrations screen, you can deploy the Splunk OpenTelemetry Collector, select one of the other supported integrations, or follow the Guided Onboarding process located at the top right of the screen: 

(Screenshot: the Available integrations screen)

If you don’t have an application ready to integrate, the Guided Onboarding process allows you to try out Splunk Observability Cloud with an existing sample application. 

Since we already have an application ready to go, we’re going to scroll down to the Platforms section and select the Kubernetes integration wizard. The wizard will guide you through setting up the Splunk OpenTelemetry Collector for Kubernetes. At the end of this process, we’ll have a deployed OpenTelemetry Collector that collects metrics, traces, and logs. As stated in the wizard description, the Splunk Distribution of the OpenTelemetry Collector for Kubernetes is packaged in a container and deployed as a DaemonSet on each node in your Kubernetes cluster.

Walking through the steps in the installation wizard, we’ll first fill out our Install Configuration:

(Screenshot: the Install Configuration step of the Kubernetes integration wizard)

Moving to the next step, we’ll use Helm (3.x) to install the Collector by following the steps in the Installation Instructions:

(Screenshot: the Installation Instructions step with the Helm commands)
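If you’d rather copy the commands than read them off a screenshot, the install typically boils down to something like the sketch below. The access token, realm, cluster name, and environment values are placeholders you’d swap in from your own Install Configuration step:

# Add the Splunk OpenTelemetry Collector Helm chart repository and refresh it
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update

# Install the Collector, substituting your own values from the wizard
helm install splunk-otel-collector \
  --set="splunkObservability.accessToken=<ACCESS_TOKEN>" \
  --set="splunkObservability.realm=<REALM>" \
  --set="clusterName=<CLUSTER_NAME>" \
  --set="environment=<ENVIRONMENT>" \
  splunk-otel-collector-chart/splunk-otel-collector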

The output from the helm install splunk-otel-collector and kubectl get pods commands should look something like this: 

(Screenshot: terminal output of the helm install and kubectl get pods commands)
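For reference, a healthy install shows the Collector’s agent pods in a Running state. The pod names, counts, and ages below are purely illustrative and will differ in your cluster:

kubectl get pods
# NAME                                READY   STATUS    RESTARTS   AGE
# splunk-otel-collector-agent-7xkqp   1/1     Running   0          45s
# splunk-otel-collector-agent-m2dwz   1/1     Running   0          45s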

Also in this step, you can optionally add annotations to enable auto-instrumentation following the steps in the setup wizard. 
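As a rough, hypothetical example of what those annotations look like: for a Java workload, auto-instrumentation is triggered by an inject annotation on the pod template, which you could add with a kubectl patch along these lines. The deployment and namespace names are placeholders, the annotation suffix varies by language, and the wizard’s own steps are the source of truth:

# Annotate a (hypothetical) Java deployment so its pods are auto-instrumented on restart
kubectl patch deployment <YOUR_DEPLOYMENT> -n <YOUR_NAMESPACE> -p \
  '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"true"}}}}}'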

Step 3: You’re done!

Seriously, that’s it! Data from your Kubernetes cluster is now flowing into Splunk Observability Cloud!

(Screenshot: the integration confirmation screen with the Explore Metric Data button)

Selecting Explore Metric Data takes you right into the Kubernetes Navigator so you can start interacting with your data. 

So what’s actually happening here? 

As we said, installing the Splunk Distribution of the OpenTelemetry Collector for Kubernetes deploys an agent component on every node in your Kubernetes cluster. Automatic discovery and configuration then finds the supported applications running in your Kubernetes environment, collects telemetry data from them, and sends that data to the Collector. The Collector processes this data and forwards it to Splunk Observability Cloud. 

But when and how are all of these things configured? 

We installed the Collector via Helm Chart and set some values in our OTel configuration when we ran the helm install command. We provided our Splunk access token and realm in this command, along with the environment and cluster name. These variables were then set in the Chart’s values.yaml file and passed into templates to dynamically generate Kubernetes manifests (the files under the rendered_manifests directory) for the Collector throughout the cluster. 
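To make that concrete, the command-line values we passed map onto keys in values.yaml roughly like this. This is a trimmed-down sketch with placeholders, not the full file:

# Excerpt of values.yaml (placeholders shown)
clusterName: <CLUSTER_NAME>
environment: <ENVIRONMENT>
splunkObservability:
  accessToken: <ACCESS_TOKEN>
  realm: <REALM>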

All configurable parameters are listed in the values.yaml file. You can set these values during the initial helm install, change them post-install with the helm upgrade command, or update them directly in the values.yaml file and redeploy the Helm Chart. Here, I’m updating the values of gateway.enabled to true and splunkObservability.profilingEnabled to false using the helm upgrade command: 

(Screenshot: terminal output of the helm upgrade command)
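The command is along these lines, again with placeholders for the values set at install time:

helm upgrade splunk-otel-collector \
  --set="splunkObservability.accessToken=<ACCESS_TOKEN>" \
  --set="splunkObservability.realm=<REALM>" \
  --set="clusterName=<CLUSTER_NAME>" \
  --set="environment=<ENVIRONMENT>" \
  --set="gateway.enabled=true" \
  --set="splunkObservability.profilingEnabled=false" \
  splunk-otel-collector-chart/splunk-otel-collector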

Note: when running helm upgrade --set, you’ll need to provide any previously set values along with any new or updated parameters. You’ll notice I also specified the access token, realm, and cluster name, even though I just wanted to update the profiling and gateway values. For this reason, if you’re configuring a long list of values, it’s probably best to set them directly in values.yaml or use your shell history to append/update parameters. 
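If you do keep your configuration in a values file, the equivalent upgrade is a single flag pointing at that file (my-values.yaml here is a hypothetical file name):

helm upgrade splunk-otel-collector -f my-values.yaml splunk-otel-collector-chart/splunk-otel-collector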

Wrap Up

Hooking up your Kubernetes environment to Splunk Observability Cloud is literally a two-step process. Should you need to update your Collector configuration, it’s just as easy. Take it for a spin yourself with your existing Splunk account, or try Splunk Observability Cloud free for 14 days.

