We have around 100 indexes, and I want to be alerted when there is a percentage drop in volume for an index/sourcetype. I don't want to create a separate alert for each index. Any suggestions, or is there an existing query that already implements this?
While I haven't used it myself, a few people have mentioned that TrackMe has a lot of this type of functionality.
Another person shared that there are a lot of options for finding hosts or sources that stop submitting events, and also passed along some helpful posts.
TekStream (a partner) owns and maintains the RAISE Situation app, which may meet your use case: https://splunkbase.splunk.com/app/5324. Happy to introduce you to the partner if there's interest.
The RAISE Situation app is used by Splunk administrators to monitor data sources coming into their Splunk environment. This app allows admins to easily customize alert thresholds for any data sources that stop sending logs into Splunk or have a large statistical change in the amount of logs sent into Splunk. With the RAISE Situation app, Splunk admins have full visibility into the efficacy of their overall data collection in Splunk.
Here's a basic threshold check for a single data source. Adjust earliest to the window you care about, use whatever volume field you have (such as bytes_sent), and set threshold to the minimum volume you expect in that window:

index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-15m
| stats sum(<your_volume_field>) as data_volume
| eval threshold=1000000
| eval is_reduced_volume=if(data_volume < threshold, "Yes", "No")
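To cover all ~100 indexes with one scheduled search instead of one alert per index, a tstats comparison of the most recent hour against a trailing baseline works well. This is a minimal sketch, assuming event count is an acceptable proxy for volume; the 48-hour baseline, 1-hour span, and 50% drop cutoff are illustrative values to tune:

| tstats count where index=* earliest=-48h@h latest=@h by index sourcetype _time span=1h
| eval period=if(_time >= relative_time(now(), "-1h@h"), "current", "baseline")
| stats avg(eval(if(period="baseline", count, null()))) as baseline_avg sum(eval(if(period="current", count, null()))) as current_count by index sourcetype
| fillnull value=0 current_count
| eval pct_drop=round((baseline_avg - current_count) / baseline_avg * 100, 1)
| where pct_drop > 50

Each result row is an index/sourcetype pair whose last hour fell more than 50% below its baseline average (a source that stopped entirely shows up as a 100% drop), so you can schedule this hourly and trigger the alert whenever the result count is greater than zero.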
For an anomaly-detection variant, note that MLTK_AnomalyScore isn't an actual Machine Learning Toolkit algorithm. One real option is MLTK's DensityFunction, which fits a distribution to the series and flags low-probability points as outliers; use a longer window than 15 minutes so the fit has enough data:

index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-24h
| timechart span=1m sum(<your_volume_field>) as data_volume
| fit DensityFunction data_volume threshold=0.01
| rename "IsOutlier(data_volume)" as is_anomaly
| where is_anomaly=1

Here threshold is the fraction of points treated as outliers (default 0.01), not a 0-to-1 anomaly score cutoff, so raise or lower it to control sensitivity.
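If your volume has a strong time-of-day pattern, DensityFunction can also fit a separate distribution per group, so a quiet overnight hour isn't judged against a daytime baseline. A hedged sketch: HourOfDay is a derived field, and the 30-day window and 30-minute span are illustrative choices meant to give each hourly group enough data points:

index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-30d
| timechart span=30m sum(<your_volume_field>) as data_volume
| eval HourOfDay=strftime(_time, "%H")
| fit DensityFunction data_volume by HourOfDay threshold=0.01
| rename "IsOutlier(data_volume)" as is_anomaly
| where is_anomaly=1

Either version can back a single scheduled alert that fires whenever any rows come back.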