I have seen this question asked a few times but have not seen a solution that works. I just had an issue where one of my two clustered indexers went down for whatever reason, and I had to restart Splunk. The only way I noticed was a message in the top menu saying there was an issue. How can I set up an alert that is triggered when this happens?
Thanks
There is a platform alert in the Monitoring Console called Abnormal state of indexer processor. See Platform alerts in the Monitoring Splunk Enterprise manual.
If you prefer one of the reports in the indexer clustering status dashboard, or in the master dashboard on your indexer cluster master, then you should be able to make a separate alert out of that search.
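If you would rather enable the platform alert from configuration instead of the Monitoring Console UI, a minimal sketch is below. Platform alerts ship as disabled saved searches in the splunk_monitoring_console app; the stanza name here is an assumption, so copy the exact name from the app's default savedsearches.conf on your version, and the email address is illustrative.

# In $SPLUNK_HOME/etc/apps/splunk_monitoring_console/local/savedsearches.conf
# Stanza name is an assumption; verify it against the app's default savedsearches.conf
[DMC Alert - Abnormal State of Indexer Processor]
disabled = 0
# Add an alert action so you are actually notified (address is illustrative)
action.email = 1
action.email.to = splunk-admins@example.com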
AHA! That is what I was looking for. Thanks!
You can set the following search as an alert on the cluster master node; a sketch of scheduling it as an alert follows the status list below.
| rest /services/cluster/master/peers | table label status | where status!="Up"
Possible status values:
Up
Down
Pending
Detention
Restarting
DecommAwaitPeer
DecommFixingBuckets
Decommissioned
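To schedule that search as an alert, a minimal savedsearches.conf sketch on the cluster master could look like the following. The stanza name, cron schedule, and email address are illustrative assumptions, not required values.

# In savedsearches.conf on the cluster master (e.g., in an app's local directory)
[Indexer Cluster Peer Not Up]
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -1m
dispatch.latest_time = now
search = | rest /services/cluster/master/peers | table label status | where status!="Up"
# Trigger when the search returns any rows, i.e. at least one peer is not Up
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = splunk-admins@example.com

Note that transient states such as Restarting will also match status!="Up", so a rolling restart can fire this alert; if that is too noisy, filter the statuses you care about in the where clause or add alert suppression.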