Monitoring Splunk

Do we have a Splunk alert if ingest across indexes/sourcetypes drops by a certain %?

Vinay
New Member

We have around 100 indexes, and I don't want to create a separate alert for each index/sourcetype to catch a % drop in volume. Any suggestions, or is there an existing query that's already implemented for this?


sloshburch
Splunk Employee

While I haven't used it myself, a few people have mentioned that TrackMe has a lot of this type of functionality.

Another person shared that there are a lot of options for finding hosts or sources that stop submitting events, along with some other helpful posts on the topic.
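As a minimal sketch of that idea (the 4-hour cutoff is just illustrative), a tstats search can list the index/sourcetype pairs that have gone quiet:

| tstats max(_time) as last_event where index=* by index, sourcetype
| eval minutes_since_last = round((now() - last_event) / 60, 0)
| where minutes_since_last > 240 ``` flag anything silent for more than 4 hours ```
| sort - minutes_since_last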


rcramer_splunk
Splunk Employee

TekStream (a Splunk partner) owns and maintains the RAISE Situation app, which may meet your use case. Happy to introduce you to the partner if there's interest: https://splunkbase.splunk.com/app/5324

The RAISE Situation app is used by Splunk administrators to monitor data sources coming into their Splunk environment. It allows admins to easily customize alert thresholds for any data sources that stop sending logs into Splunk or show a large statistical change in the amount of logs sent. With the RAISE Situation app, Splunk admins have full visibility into the efficacy of their overall data collection in Splunk.

JRW
Splunk Employee
As with most things in Splunk, there are a million ways to do it; something basic would be the following:
 
index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-15m ``` or another time range ```
| stats sum(<your_volume_field>) as data_volume ``` e.g. bytes_sent ```
| eval threshold=1000000 ``` set desired threshold for data volume ```
| eval is_reduced_volume=if(data_volume < threshold, "Yes", "No")
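Since the original question is about ~100 indexes without a per-index alert, the same threshold idea can be scaled out with tstats (a rough sketch using event counts rather than a byte field; the hourly windows and the -50% cutoff are just illustrative):

| tstats count where index=* earliest=-2h@h latest=@h by index, sourcetype, _time span=1h
| eval period=if(_time < relative_time(now(), "-1h@h"), "previous", "current")
| stats sum(eval(if(period=="previous", count, 0))) as previous_count sum(eval(if(period=="current", count, 0))) as current_count by index, sourcetype
| eval pct_change=if(previous_count > 0, round((current_count - previous_count) / previous_count * 100, 1), null())
| where pct_change < -50 ``` flag any index/sourcetype whose event count dropped by more than half hour-over-hour ```

Index/sourcetype pairs with no events in the previous hour end up with a null pct_change and are simply skipped by the final where clause.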
You can get a little deeper with ML (assuming the Machine Learning Toolkit is installed); for example, the DensityFunction algorithm can flag anomalous volume:

index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-24h ``` a longer window gives the model more data to learn from ```
| timechart span=1m sum(<your_volume_field>) as data_volume
| fillnull value=0 data_volume ``` treat empty minutes as zero volume ```
| fit DensityFunction data_volume threshold=0.01 ``` flag roughly the most extreme 1% of values ```
| eval is_anomaly=if('IsOutlier(data_volume)'==1, "Yes", "No")
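Any of these searches can then be saved as a scheduled alert that only fires on a problem, for example by appending | where is_reduced_volume="Yes" (or is_anomaly="Yes") and setting the trigger condition to "number of results is greater than 0".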