Monitoring Splunk

What Infrastructure Health Checks do you use for your Managed Splunk Cloud

Melissa_T
Engager

We had an issue where our Splunk Cloud cluster was out of sync for three weeks. During that time, knowledge objects worked intermittently. We didn't realize at the time that this also meant our field extractions were working intermittently, so for that three-week period our alerts and searches returned faulty, inconsistent results.

Splunk told us that infrastructure health checks, such as cluster health, are the customer's responsibility, not Splunk's. While it is a managed service, they are only responsible for upgrades and for fixing the cluster if it goes down. So what health checks are you using? I really want to focus on identifying issues that cause faulty search results, but anything we should be alerting on will help.

We are currently searching the logs for issues like:

- "The captain does not share common baseline with * member(s) in the cluster"
- "* is having problems pulling configurations from the search head cluster captain"
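Messages like these end up in the `_internal` index, so they can be turned into a scheduled alert. Here is a rough sketch; the exact message text, `log_level` values, and `component` names can vary by Splunk version, so verify the search against your own splunkd logs before relying on it:

```
index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR)
    ("does not share common baseline" OR "problems pulling configurations from the search head cluster captain")
| stats count latest(_time) AS last_seen BY host component
```

Scheduled to run every 15 minutes or so with an alert condition of count > 0, this would have flagged our out-of-sync condition early. Another option worth testing (if your Splunk Cloud role permits the `rest` command) is polling `| rest /services/shcluster/status splunk_server=local` to inspect search head cluster state directly, though REST access may be restricted in managed Splunk Cloud.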


Thank you!
