In Part 1 of this series, we covered what Automatic Discovery is and why it’s critical for observability at scale. Now let’s get hands-on. Enabling Automatic Discovery is pretty straightforward; the key is understanding the configuration patterns and security considerations that will save you time and energy down the road. By the end of this post, you’ll have Automatic Discovery running in your Kubernetes environment and know how to avoid the most common setup pitfalls.
Note: this post focuses on Kubernetes deployment using Helm, which is the recommended approach for most production environments. The examples use Helm values configuration that maps to the underlying OpenTelemetry Collector configuration.
Before jumping into configuration, make sure you have:
- A Kubernetes cluster and kubectl access to it
- Helm 3 installed
- A Splunk Observability Cloud access token and your organization’s realm
- Permission to create Kubernetes secrets in the namespace where the Collector will run
Note: If you’re running the Collector on Windows, Automatic Discovery of third-party services (databases, caches, message queues) is not currently supported. Instead, you’ll need to use traditional manual receiver configuration. Windows does, however, support zero-code APM instrumentation for backend applications.
When deploying the Splunk Distribution of the OpenTelemetry Collector using Helm, you enable Automatic Discovery through your Helm values configuration. Here's what that basic setup looks like:
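A minimal sketch of that values file follows. The cluster name, realm, and access token are placeholders, and the exact nesting of the discovery keys can vary by chart version, so verify against your chart’s documentation:

```yaml
# values.yaml -- minimal sketch; placeholders and key nesting may vary by chart version
clusterName: my-cluster                    # placeholder: your cluster name
splunkObservability:
  realm: us0                               # placeholder: your realm
  accessToken: <access-token>              # placeholder: reference a secret in production
featureGates: splunk.continuousDiscovery   # feature gate named in this post
agent:
  discovery:
    enabled: true                          # turns on Automatic Discovery
```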
The enabled: true flag and the featureGates: splunk.continuousDiscovery setting are both required. The feature gate enables the Automatic Discovery interface in Splunk Observability Cloud’s Data Management UI, where you can view all discovered services.
As we discussed in Part 1 of this series, observers are how Automatic Discovery detects new services in your environment. The configuration in Step 1 includes the k8s_observer, which is automatically configured when you enable discovery in Kubernetes.
The k8s_observer watches for:
- Pods and their exposed ports as they are created, updated, or removed
- Nodes in the cluster (and, optionally, services and ingresses)
The auth_type: serviceAccount setting tells the observer to use the Collector’s Kubernetes service account for API access, which is the recommended approach for security and simplicity.
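Under the hood, the Helm chart renders this into a Collector extension along these lines; this is a sketch of the generated configuration, not something you need to write yourself:

```yaml
extensions:
  k8s_observer:
    auth_type: serviceAccount   # use the Collector pod's service account for Kubernetes API access
    observe_pods: true          # emit endpoints for pods and their exposed ports
    observe_nodes: true         # emit endpoints for cluster nodes
```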
For Linux hosts (non-Kubernetes): use the host_observer instead, which monitors processes and endpoints on the local machine.
Most infrastructure services require authentication. You configure credentials through the properties.receivers section in your Helm values file, referencing environment variables for security:
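For PostgreSQL, for example, that section might look something like this; the property names mirror the postgresql receiver’s configuration, and the exact nesting under agent.discovery is an assumption to check against your chart version:

```yaml
agent:
  discovery:
    properties:
      receivers:
        postgresql:
          config:
            username: "${env:POSTGRES_USER}"       # resolved from an environment variable at runtime
            password: "${env:POSTGRES_PASSWORD}"   # never hardcoded in values files
            tls:
              insecure: true                       # adjust to match your TLS setup
```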
This configuration applies to all discovered PostgreSQL instances. The Collector will automatically use these credentials when connecting to any PostgreSQL database it discovers in your cluster.
The environment variables referenced in Step 2 are populated from Kubernetes secrets. Add the extraEnvs section to your Helm values to bind secret data to environment variables:
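Here’s a sketch, assuming a secret named postgres-monitoring with username and password keys (created in the next step). Depending on your chart version, extraEnvs may need to sit under the agent section rather than at the top level:

```yaml
extraEnvs:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-monitoring   # hypothetical secret name; must match the secret you create
        key: username
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-monitoring
        key: password
```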
Before deploying, create the Kubernetes secret in the same namespace as your Collector:
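For example, using kubectl (the secret name and keys must match what you referenced in extraEnvs; the values are placeholders for your read-only monitoring credentials):

```bash
kubectl create secret generic postgres-monitoring \
  --namespace <collector-namespace> \
  --from-literal=username=<read-only-user> \
  --from-literal=password=<password>
```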
This approach keeps credentials secure and separate from your Helm values files, which can be safely stored in version control.
Note: if you’re monitoring multiple database types, such as PostgreSQL and MySQL, add the additional database receivers to the receivers section of your Helm values file, add the corresponding environment variables to the extraEnvs section, and create a separate secret for each.
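For example, adding MySQL alongside PostgreSQL follows the same pattern with its own receiver entry, environment variables, and secret (names here are placeholders):

```yaml
agent:
  discovery:
    properties:
      receivers:
        mysql:
          config:
            username: "${env:MYSQL_USER}"
            password: "${env:MYSQL_PASSWORD}"
extraEnvs:
  - name: MYSQL_USER
    valueFrom:
      secretKeyRef:
        name: mysql-monitoring   # hypothetical second secret
        key: username
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-monitoring
        key: password
```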
With these individual configuration components in place, we can now complete deployment.
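If you’re installing from the Splunk Helm repository, deployment is a standard helm upgrade --install with your values file; the release name and namespace below are placeholders:

```bash
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
helm upgrade --install splunk-otel-collector \
  splunk-otel-collector-chart/splunk-otel-collector \
  --namespace <collector-namespace> \
  -f values.yaml
```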
To verify that Automatic Discovery is working, check that the Collector pods are running and watch the Collector logs for Automatic Discovery events.
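For example (the label selector is an assumption; adjust it to match how your release labels its pods):

```bash
# Check that the Collector pods are running
kubectl get pods -n <collector-namespace> -l app=splunk-otel-collector

# Follow the agent logs and filter for discovery-related entries
kubectl logs -n <collector-namespace> -l app=splunk-otel-collector -f | grep -i discovery
```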
The logs should include entries showing that endpoints were discovered and the corresponding receivers were started.
You can also check the Data Management UI in Splunk Observability Cloud, as mentioned in Part 1, to validate Discovered Services.
Automatic Discovery is powerful, but to get the most out of it, follow these best practices, based on what we’ve seen work well for our customers.
Start with one or two service types, like PostgreSQL, to understand the discovery patterns in your environment. Once you’re comfortable with how services are detected, credentialed, and monitored, you can scale up to cover additional infrastructure. This incremental approach helps prevent unexpected issues from propagating across the cluster.
Avoid using application or admin accounts for monitoring. Instead, create service-specific read-only users for databases and other infrastructure services. This minimizes risk and ensures your observability setup has just enough access to gather metrics without exposing sensitive data.
Never hardcode credentials in your Helm values or configuration files. Use Kubernetes secrets, and consider integrating external secret managers. This not only improves security but also allows easier credential rotation without redeploying your Collector.
While not required, consistent Kubernetes labels on your workloads make discovered services easier to organize and track in Splunk Observability Cloud. Labels also help when setting up dashboards, alerts, or filtering metrics for specific clusters, namespaces, or environments.
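For example, a consistent label set on a workload might look like this; the specific keys are just one common convention, not a requirement:

```yaml
metadata:
  labels:
    app.kubernetes.io/name: checkout-service   # placeholder workload name
    app.kubernetes.io/part-of: storefront      # placeholder application grouping
    environment: production
    team: payments
```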
Set up alerts or dashboards to track the health of your discovery setup. Watch for failed discovery attempts, authentication issues, network connectivity problems, or services that unexpectedly disappear. Catching issues early prevents blind spots in your monitoring.
Maintain clear records of which services are being monitored, where credentials come from, and how receivers are configured. Include team ownership, escalation paths, and any customizations. This documentation is invaluable when onboarding new team members or troubleshooting incidents. Capture it in runbooks in your incident response tool (such as Splunk On-Call) so responders can resolve the inevitable issues without having to hunt for this information.
As always, validate configuration updates in development or staging environments. Automatic Discovery is dynamic, and testing first ensures you won’t unintentionally disrupt monitoring in production.
With Automatic Discovery configured, you now have automatic monitoring for your infrastructure services. Next, in Part 3 of this series, we’ll explore practical use cases, such as automatically discovering and monitoring more of your services, including databases, web services, and messaging systems. We’ll also get into advanced patterns for complex multi-service applications.
Missed the beginning of this series? Check out Automatic Discovery Part 1: What is Automatic Discovery in Splunk Observability Cloud and Why it Mat.... Ready to get started with Automatic Discovery? Set it up with Splunk Observability Cloud’s free 14-day trial.