We need end-to-end insight into our application environments to confidently ensure everything is up and running so that our customers stay ridiculously happy. We need to be able to monitor, troubleshoot, anticipate, and detect issues. We need to do it intuitively, quickly, and efficiently so that when problems arise, time to resolution is as short as possible. There are plenty of solutions out there, but in this post, we’re going to learn how to quickly and easily integrate and configure our Kubernetes application with Splunk Observability Cloud using the Splunk Distribution of the OpenTelemetry Collector for Kubernetes.
Splunk Observability Cloud is Splunk’s full-stack, OpenTelemetry-native observability offering. Complete with Application Performance Monitoring, Infrastructure Monitoring, Real User Monitoring, Synthetic Monitoring, and Log Observer Connect, Splunk Observability Cloud provides a centralized, detailed view of the systems running in your environment, no matter where they’re hosted: on-premises, in private or public clouds, or even in serverless environments. This visibility not only helps you pinpoint the source of problems faster, it also lets you collect your telemetry data using OpenTelemetry, so you can send your standardized data where you want, when you want, with no vendor lock-in required.
Ok. This sounds complicated. Observability across digital architecture and multiple Splunk products? Plus Kubernetes? We promise it’s easy. Let’s walk through it together.
If you don’t already have a Splunk account, try Splunk Observability Cloud free for 14 days! Just fill in a few details, verify your email address, and voila! Upon verification, you’ll be taken directly to Splunk Observability Cloud’s home page.
From the home page, open up the Data Management section on the left toolbar.
Select Add Integration located at the top right of the screen:
Once on the Available integrations screen, you can deploy the Splunk OpenTelemetry Collector, select one of the other supported integrations, or follow the Guided Onboarding process located at the top right of the screen:
If you don’t have an application ready to integrate, the Guided Onboarding process allows you to try out Splunk Observability Cloud with an existing sample application.
Since we already have an application ready to go, we’re going to scroll down to the Platforms section and select the Kubernetes integration wizard. The wizard will guide you through setting up the Splunk OpenTelemetry Collector for Kubernetes. At the end of this process, we’ll have a deployed OpenTelemetry Collector that collects metrics, traces, and logs. As stated in the wizard description, the Splunk Distribution of the OpenTelemetry Collector for Kubernetes is packaged in a container and deployed as a DaemonSet, so an agent runs on each node in your Kubernetes cluster.
Walking through the steps in the installation wizard, we’ll fill out our Install Configuration:
Moving to the next step, we’ll use Helm (3.x) to install the Collector following the steps in the Installation Instructions:
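As a rough sketch, the Helm steps look something like the following; the wizard-generated instructions will have your own access token and realm filled in, and us1, my-cluster, and dev below are placeholders you’d replace with your own values:

```sh
# Add the Splunk OpenTelemetry Collector chart repository and refresh the index
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update

# Install the Collector; access token, realm, cluster name, and environment are placeholders
helm install splunk-otel-collector \
  --set="splunkObservability.accessToken=<YOUR_ACCESS_TOKEN>" \
  --set="splunkObservability.realm=us1" \
  --set="clusterName=my-cluster" \
  --set="environment=dev" \
  splunk-otel-collector-chart/splunk-otel-collector
```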
The helm install splunk-otel-collector command prints a deployment summary for the release, and running kubectl get pods afterwards should show the Collector pods coming up across your cluster.
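As a quick sanity check, you can list the pods yourself; exact pod names and counts will vary with your cluster:

```sh
# Each node should be running an agent pod named something like
# splunk-otel-collector-agent-<id> with a STATUS of Running
kubectl get pods
```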
Also in this step, you can optionally add annotations to enable auto-instrumentation following the steps in the setup wizard.
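As a rough sketch of what that annotation step can look like for a Java workload, assuming a hypothetical Deployment called my-app and a Collector installed in the default namespace (follow the wizard for the exact annotation for your language and setup):

```sh
# Hypothetical example: ask the operator to inject Java auto-instrumentation
# into the "my-app" Deployment by pointing the annotation at the
# Instrumentation object the Helm chart created in the "default" namespace
kubectl patch deployment my-app -n default -p \
  '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"default/splunk-otel-collector"}}}}}'
```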
Seriously, that’s it! Data from your Kubernetes cluster is now flowing into Splunk Observability Cloud!
Selecting Explore Metric Data takes you right into the Kubernetes Navigator so you can start interacting with your data.
As we said, installing the Splunk Distribution of the OpenTelemetry Collector for Kubernetes deploys an agent component on every node in your Kubernetes cluster. Automatic discovery and configuration finds the supported applications running in your Kubernetes environment, collects telemetry data from them, and sends that data to the Collector. The Collector processes this data and forwards it to Splunk Observability Cloud.
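To make that data flow concrete, here’s a highly simplified, illustrative sketch of the kind of pipeline the rendered agent configuration contains; the Helm chart generates the real configuration for you, with many more receivers, processors, and exporters than shown here:

```yaml
# Illustrative only; the actual agent config is rendered by the Helm chart
receivers:
  hostmetrics:          # node-level CPU and memory metrics
    scrapers:
      cpu: {}
      memory: {}
  otlp:                 # telemetry sent by instrumented applications
    protocols:
      grpc: {}
      http: {}
exporters:
  signalfx:             # forwards metrics to Splunk Observability Cloud
    access_token: ${SPLUNK_OBSERVABILITY_ACCESS_TOKEN}
    realm: us1
service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp]
      exporters: [signalfx]
```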
We installed the Collector via Helm Chart and set some values in our OTel configuration when we ran the helm install command. We provided our Splunk access token and realm in this command, along with the environment and cluster name. These variables were then set in the Chart’s values.yaml file and passed into templates to dynamically generate the Kubernetes manifests (the files under the rendered_manifests directory) for the Collector throughout the cluster.
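In values.yaml terms, the handful of values we passed on the command line map to entries roughly like this (placeholder values shown, not the chart’s full values.yaml):

```yaml
clusterName: my-cluster      # hypothetical cluster name
environment: dev             # hypothetical deployment environment
splunkObservability:
  accessToken: <YOUR_ACCESS_TOKEN>
  realm: us1                 # use your own realm
```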
All configurable parameters are listed in the values.yaml file. You can set these values during the initial helm install, update them post-install using the helm upgrade command, or change them directly in the values.yaml file and redeploy the Helm Chart. Here, I’m updating gateway.enabled to true and splunkObservability.profilingEnabled to false using the helm upgrade command:
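A sketch of that upgrade, reusing the same placeholder token, realm, cluster name, and environment as before:

```sh
helm upgrade splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector \
  --set="splunkObservability.accessToken=<YOUR_ACCESS_TOKEN>" \
  --set="splunkObservability.realm=us1" \
  --set="clusterName=my-cluster" \
  --set="environment=dev" \
  --set="gateway.enabled=true" \
  --set="splunkObservability.profilingEnabled=false"
```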
Note: when running helm upgrade with --set, you’ll need to re-specify any previously set values along with any new or updated parameters. You’ll notice I also specified the access token, realm, and cluster name, even though I only wanted to update the profiling and gateway values. For this reason, if you’re configuring a long list of values, it’s probably best to set them directly in values.yaml or use your shell history to append or update parameters.
Hooking up your Kubernetes environment to Splunk Observability Cloud is literally a 2-step process. Should you need to update your Collector configuration, it’s just as easy. Take it for a spin yourself with your existing Splunk account or try Splunk Observability Cloud free for 14 days.