
What Infrastructure Health Checks do you use for your Managed Splunk Cloud?

Melissa_T
Engager

We had an issue where our Splunk Cloud cluster was out of sync for three weeks. During that time, knowledge objects worked only intermittently. We didn't realize at the time that this also meant our field extractions were working intermittently, so for that three-week period our alerts and searches returned faulty/inconsistent results.

Splunk told us that infrastructure health checks, such as cluster health, are the customer's responsibility, not Splunk's. Even though it is a managed service, they are only responsible for upgrades and for fixing the cluster if it goes down. So what health checks are you using? I really want to focus on identifying issues that cause faulty search results, but anything we should be alerting on would help.
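
One idea we are testing for overall splunkd status is polling the built-in health report from a scheduled search and alerting when anything is not green. This is only a rough sketch: it assumes the rest search command and the /services/server/health/splunkd endpoint are accessible on our Splunk Cloud search head, which may not be the case on every stack.

    | rest /services/server/health/splunkd splunk_server=local
    | table splunk_server health
    | where health!="green"

Scheduled every few minutes with an alert on "number of results > 0", this would at least tell us when splunkd itself reports a degraded state.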

Currently we are looking in the internal logs for messages like these (a rough search built on them is sketched after the list):

The captain does not share common baseline with * member(s) in the cluster
* is having problems pulling configurations from the search head cluster captain
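
Here is the kind of scheduled search we are sketching around those messages. It assumes the search heads' _internal splunkd logs are searchable from our stack and that these message strings stay stable across versions:

    index=_internal sourcetype=splunkd
        ("does not share common baseline" OR "is having problems pulling configurations from the search head cluster captain")
    | stats count latest(_time) as last_seen by host component
    | convert ctime(last_seen)

The idea is to alert whenever either message shows up, grouped by host and component, so we can see which member is falling out of sync and when it last happened.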

 

Thank you!
