We had an issue where our Splunk Cloud search head cluster was out of sync for about three weeks. During that time, knowledge objects worked only intermittently. What we didn't realize at the time was that this also meant our field extractions were working intermittently, so for that entire three-week period our alerts and searches returned faulty/inconsistent results.
Splunk told us that infrastructure health checks, like cluster health, are the customer's responsibility, not Splunk's. Even though it is a managed service, they are only responsible for upgrades and for fixing the cluster if it goes down. So what health checks are you using? I mainly want to identify issues that cause faulty search results, but anything we should be alerting on would help.
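One check we have been experimenting with is polling the search head cluster status endpoint directly. This is just a minimal sketch, and it assumes the rest command and that endpoint are accessible on your Splunk Cloud stack (access can vary by experience and role), so treat it as a starting point rather than a proven check:

    | rest /services/shcluster/status splunk_server=local
    | fields captain*

If that search errors out or the captain fields come back empty or unexpected, it is probably worth digging into the cluster state further.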
In the meantime, we are searching the internal logs for messages like these (a sample alert search follows the list):
The captain does not share common baseline with * member(s) in the cluster
* is having problems pulling configurations from the search head cluster captain
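Once we spotted those, we put together a scheduled search to alert on them. This is a rough sketch that assumes these messages land in index=_internal under sourcetype=splunkd with the standard component field extracted, so verify the exact message text against your own logs:

    index=_internal sourcetype=splunkd
        ("does not share common baseline" OR "problems pulling configurations from the search head cluster captain")
    | stats count AS occurrences latest(_time) AS last_seen BY host component
    | eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")

We run it as an alert every 15 minutes over the last hour and trigger when the result count is greater than 0.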
Thank you!