We’ve already explored a few observability topics in a Kubernetes environment: Common Failures in a Kubernetes Environment, how to Detect and Resolve Issues in a Kubernetes Environment, and the process of Integrating Kubernetes and Splunk Observability Cloud. But even though setting up the OpenTelemetry Collector and exporting telemetry data to a backend observability platform like Splunk Observability Cloud takes only a few short steps, those initial steps don’t account for what happens when new workloads are spun up. Kubernetes is known for its dynamic nature, and observability needs to keep pace with the ever-changing environment. In this post, we’ll look at how Kubernetes workloads are traditionally monitored, then see how the new Kubernetes annotation-based discovery for the OpenTelemetry Collector makes it even easier.
Monitoring Workloads: The Traditional Approach
Monitoring dynamic workloads has typically required manually defined conditional configurations: engineers write configuration “templates” that adjust the monitoring setup based on what’s automatically discovered in the environment.
For example, to dynamically discover and monitor nginx pods that are deployed on the cluster, a receiver_creator definition is required:
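The original snippet isn’t reproduced here, but based on the upstream `receiver_creator` and `k8s_observer` documentation, a definition for this case looks roughly like the following sketch (the endpoint path and collection interval are illustrative):

```yaml
extensions:
  # The k8s_observer watches the Kubernetes API for pods scheduled on this node
  k8s_observer:
    auth_type: serviceAccount
    node: ${env:K8S_NODE_NAME}
    observe_pods: true

receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      nginx:
        # Instantiate an nginx receiver when a discovered pod exposes
        # port 80 and its name matches "nginx"
        rule: type == "port" && port == 80 && pod.name matches "nginx"
        config:
          endpoint: 'http://`endpoint`/nginx_status'
          collection_interval: 30s
```

The backticked `` `endpoint` `` expression is expanded by the Collector at runtime with the address of each discovered pod.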
This configuration is activated when the Kubernetes API reports a pod that exposes port 80 (the default HTTP port) and whose name matches the nginx keyword.
This approach works, but it requires a Collector configuration entry for each workload type. If a new workload type is added to the environment, the configuration must be updated and redeployed, which can be time-consuming, especially if different teams are responsible for different parts of the deployment. It also depends on the pod’s name, which limits your freedom to name workloads after what they actually do and makes problems harder to troubleshoot and diagnose.
Monitoring Workloads: The New Approach
The recently added Kubernetes annotation-based discovery for the OpenTelemetry Collector lets you configure workload monitoring dynamically by adding annotations directly to pods or namespaces. The k8s_observer detects objects appearing in the cluster and reports them to the Collector’s receiver_creator, which automatically adjusts its configuration to monitor the annotated workloads. The annotations define the receiver that’s used to scrape telemetry data.
Here’s an example pod annotation that tells the OpenTelemetry receiver_creator to scrape metrics and enables discovery on the pod:
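The original annotation example isn’t included here; a sketch following the `io.opentelemetry.discovery` annotation convention documented for the receiver_creator might look like this (the pod name, image, and receiver configuration are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    # Enable metrics discovery for this pod
    io.opentelemetry.discovery.metrics/enabled: "true"
    # Name the scraper (receiver) the Collector should instantiate
    io.opentelemetry.discovery.metrics/scraper: nginx
    # Optional receiver configuration; `endpoint` is filled in by the observer
    io.opentelemetry.discovery.metrics/config: |
      endpoint: "http://`endpoint`/nginx_status"
      collection_interval: 30s
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
```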
Finally, the discovery functionality must be enabled in the receiver_creator:
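The exact snippet isn’t reproduced here, but per the upstream receiver_creator documentation, enabling discovery is a small addition to the receiver definition:

```yaml
receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    # Turn on annotation-based discovery: receivers are created from
    # io.opentelemetry.discovery.* annotations instead of static rules
    discovery:
      enabled: true
```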
Once the deployment changes are applied and the Collector is restarted, you’re done!
Adding a new nginx workload? No problem. Annotate your pods as shown above to dynamically configure monitoring for the new workload. The pod’s name no longer matters, and the annotations let you be more declarative about what’s collected and how.
Wrap up
The amount of configuration maintenance required significantly decreases thanks to this new annotation-based discovery for Kubernetes and the OpenTelemetry Collector. Kubernetes observability becomes as dynamic as the environment it monitors, and configuration of dynamic workload discovery becomes a seamless experience.
Already using the OpenTelemetry Collector in Kubernetes and want to try out this new feature? Want to export your automatically discovered Kubernetes workloads and their telemetry data to a unified observability platform? Check out Splunk Observability Cloud’s free 14-day trial.