In application performance monitoring, saturation is defined as the total load on a system, or how much of a given resource is consumed at a given time. If saturation is at 100%, your system is running at full capacity, which is generally a bad thing. Agent saturation is a related concept: it represents the percentage of available system resources currently being monitored by an observability agent. 100% agent saturation means 100% of available resources are instrumented with observability agents, which is a great thing. In observability practice, agent saturation measures how thoroughly a system is instrumented and can be expressed as:
agent saturation (%) = (instrumented resources / total resources) × 100
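For example, if 40 of the 50 hosts in an environment are running an observability agent, agent saturation is (40 / 50) × 100 = 80%.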
Because greater visibility into system health and performance means proactive detection of issues, improved user experience, more efficient troubleshooting, decreased downtime, and countless other pluses, 100% agent saturation is the ultimate goal.
So why doesn’t everyone get to 100% agent saturation for full system observability and magical unicorn application visibility status? It’s challenging! Setting up observability agents across distributed applications and environments takes time, and in ephemeral, dynamic systems that are already wired into existing solutions, it can simply be too much of a lift.
But the good news is that if you’re already using Splunk (maybe for logging and security) there are quick and easy ways to improve your system observability. In this post, we’re going to look at how to leverage the Splunk Add-on for OpenTelemetry Collector to gain a quick win when it comes to improving agent saturation.
If you’re a Splunk Enterprise or Splunk Cloud Platform customer who ingests logs using universal forwarders, you can quickly improve agent saturation and deploy, update, and configure OpenTelemetry Collector agents the same way you manage any of your other technology add-ons (TAs). The Splunk Add-on for OpenTelemetry Collector leverages your existing Splunk Platform and Splunk Cloud deployment mechanisms (specifically the universal forwarder and the deployment server) to roll out the OpenTelemetry Collector and gain increased visibility into your system from Splunk Observability Cloud. The add-on is a packaged version of the Splunk Distribution of the OpenTelemetry Collector that simplifies configuration, management, and the collection of metrics and traces. In other words, OpenTelemetry instrumentation exists out of the box anywhere a universal forwarder is already present for logging and security use cases, making it easier to instrument systems quickly and see telemetry data from within Splunk Observability Cloud. That comprehensive coverage also ships with out-of-the-box Collector content and configuration, including Splunk-specific metadata and optimizations (like batching, compression, and efficient exporting), all preconfigured, so you can get answers from your observability data faster and with less effort.
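You don’t need to write any of that configuration yourself, but for a sense of what “batching, compression, and efficient exporting” look like in practice, here’s a minimal sketch of a standalone Splunk Distribution of OpenTelemetry Collector config. It’s illustrative only: the file path, token, and realm are placeholders, and the add-on’s actual bundled configuration may differ.

```bash
# Illustrative only: a minimal standalone Collector config showing the kind of
# pipeline the add-on preconfigures for you (host metrics in, batched export to
# Splunk Observability Cloud; the signalfx exporter compresses payloads by default).
cat > /tmp/example-otel-agent.yaml <<'EOF'
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
      disk:
      network:

processors:
  batch:    # group data points before export to cut request overhead

exporters:
  signalfx:
    access_token: "YOUR_O11Y_ACCESS_TOKEN"   # placeholder
    realm: "us0"                             # placeholder; use your org's realm

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [batch]
      exporters: [signalfx]
EOF
```

The point of the add-on is that an equivalent, Splunk-optimized configuration ships preconfigured, so you manage it like any other TA instead of hand-editing YAML on every host.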
Prerequisites for using the Splunk Add-on for OpenTelemetry Collector include:
- A Splunk Enterprise or Splunk Cloud Platform environment that ingests data using universal forwarders, with a deployment server to manage them
- A Splunk Observability Cloud organization, along with its access token and realm
The Splunk Add-on for OpenTelemetry Collector is available on Splunkbase similar to other TAs, and you can deploy it alongside universal forwarders using existing Splunk tools like the deployment server.
We have a Linux EC2 instance that we’re going to instrument, but first we need to download the Splunk Add-on for OpenTelemetry Collector from Splunkbase:
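Splunkbase downloads happen in the browser after you sign in, so the package name will vary by version. As a sketch (the file name, user, and host below are placeholders), copying the downloaded package over to the deployment server might look like:

```bash
# Copy the downloaded add-on package to the Splunk deployment server host.
# File name, user, and host are placeholders for your environment.
scp ~/Downloads/splunk-add-on-for-the-opentelemetry-collector_*.tgz \
    splunk-admin@deployment-server:/tmp/
```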
We’ll unzip the package, then create a local folder and copy over the configuration and credential files:
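Roughly, that step looks like the following; the archive name is a placeholder, and copying the credential file from the add-on’s default/ folder is an assumption based on this walkthrough (the exact file set may vary by add-on version):

```bash
# Extract the add-on package (file name is a placeholder for your downloaded version)
tar -xzf splunk-add-on-for-the-opentelemetry-collector_*.tgz
cd Splunk_TA_otel

# Create a local/ folder for our settings (never edit default/ directly) and copy
# over the config/credential files referenced in this walkthrough. The source file
# name is an assumption; check the add-on's default/ folder for the actual set.
mkdir -p local
cp default/access_token local/
```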
Next, we’ll get the access token and the realm for our Splunk Observability Cloud organization:
Your organization's realm can be found under your user’s organizations:
Next, we set these values in the add-on’s local/access_token file:
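As a sketch (the token value is a placeholder, and exactly where the realm lives in the add-on’s local configuration depends on the add-on version, so check its documentation):

```bash
# Write the Splunk Observability Cloud access token (placeholder value) into the
# add-on's local config. The realm from the previous step is set alongside it in
# the add-on's local configuration; the exact setting name depends on the version.
echo 'YOUR_O11Y_ACCESS_TOKEN' > Splunk_TA_otel/local/access_token
```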
We then need to make sure the Splunk Add-on for OpenTelemetry Collector folder (Splunk_TA_otel) is in the deployment apps folder on the deployment server instance:
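On the deployment server, that’s a single copy into the standard deployment apps directory (assuming a default install where $SPLUNK_HOME is something like /opt/splunk):

```bash
# Place the configured add-on where the deployment server looks for deployable apps
cp -r Splunk_TA_otel "$SPLUNK_HOME/etc/deployment-apps/"
```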
We’ll then move over to the deployment server UI in Splunk Enterprise to create the Splunk_TA_otel server class and add the relevant hosts along with the Splunk_TA_otel app. Once the TA is installed, make sure you check both Enable App and Restart Splunkd, then select Save:
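If you prefer configuration files to the UI, the same server class can be sketched in serverclass.conf on the deployment server; the host whitelist below is a placeholder, and a reload tells the deployment server to push the app:

```bash
# Equivalent serverclass.conf entry (sketch; host whitelist is a placeholder)
cat >> "$SPLUNK_HOME/etc/system/local/serverclass.conf" <<'EOF'
[serverClass:Splunk_TA_otel]
whitelist.0 = my-linux-ec2-host

[serverClass:Splunk_TA_otel:app:Splunk_TA_otel]
stateOnClient = enabled
restartSplunkd = true
EOF

# Have the deployment server pick up the change and push the app to matching hosts
"$SPLUNK_HOME/bin/splunk" reload deploy-server
```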
That’s it! If we now navigate to Splunk Observability Cloud, we’ll see the telemetry data flowing in from our EC2 instance:
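If you’d like to double-check on the instance itself first, a quick sanity check might look like this (the app path is standard for a universal forwarder install; matching on “otel” in the process list is an assumption, since the Collector’s process name can differ by add-on version):

```bash
# Confirm the add-on was deployed to the universal forwarder on the EC2 instance
ls "$SPLUNK_HOME/etc/apps/" | grep -i otel

# Check that a Collector process is running (process name pattern is an assumption)
ps aux | grep -i "[o]tel"
```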
Increasing agent saturation and improving observability for comprehensive system insight can be quick and easy. Not sure how you’re currently doing in terms of agent saturation? Check out our Measuring & Improving Observability-as-a-Service blog post to learn how to set KPIs on agent saturation. Ready to improve your agent saturation? Sign up for a Splunk Observability Cloud 14-day free trial, integrate the Splunk Add-on for OpenTelemetry Collector, and start on your journey to 100% agent saturation.