Monitoring Splunk

What Infrastructure Health Checks do you use for your Managed Splunk Cloud?

Melissa_T
Engager

We had an issue where our Splunk Cloud cluster was out of sync for roughly three weeks.  During that time knowledge objects worked only intermittently.  We didn't know at the time that this also meant our field extractions were being applied intermittently, so for that three-week period our alerts and searches returned faulty/inconsistent results.

Splunk told us that health checks for infrastructure, such as cluster health, are the responsibility of the customer, not Splunk.  While it is a managed service, they are only responsible for upgrades and for fixing the cluster if it is down.  So what health checks are you using?  I really want to focus on identifying issues that would cause faulty search results, but anything we should be alerting on will help.
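One candidate check is Splunk's own health report, which is exposed over REST.  This is only a minimal sketch, assuming the /services/server/health/splunkd endpoint is reachable via the rest search command on your Splunk Cloud stack (it may be restricted in some environments) and that it returns a health field of green/yellow/red:

| rest splunk_server=local /services/server/health/splunkd
| table title health
| where health!="green"

Scheduled with a trigger on any results, that would at least flag when splunkd reports anything other than green.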

Currently we are looking in the logs for messages like the following (a sample alert search is sketched after the list):

The captain does not share common baseline with * member(s) in the cluster
* is having problems pulling configurations from the search head cluster captain
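
To turn those messages into an alert, a scheduled search along these lines should work.  It is only a sketch: the search strings are the two messages quoted above, and the grouping, time range, and schedule are assumptions you would tune for your own stack.

index=_internal sourcetype=splunkd
    ("does not share common baseline" OR "problems pulling configurations from the search head cluster captain")
| stats count AS events latest(_time) AS last_seen by host, component
| convert ctime(last_seen)
| sort - events

Run over the last 15 minutes on a 15-minute schedule, with the alert triggering whenever the number of results is greater than zero.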


Thank you!
