Monitoring Splunk

Do we have a Splunk alert if ingest across indexes/sourcetypes drops by a certain %?

Vinay
New Member

We have around 100 indexes, and I don't want to create a separate alert for each index/sourcetype to detect a percentage drop in volume. Any suggestions, or is there an existing query that already does this?


sloshburch
Splunk Employee

While I haven't used it myself, a few people have mentioned that TrackMe has a lot of this type of functionality.

Another person shared that there are a lot of options for finding hosts or sources that stop submitting events, along with some helpful posts on the topic.


rcramer_splunk
Splunk Employee

TekStream (a Splunk partner) owns and maintains the RAISE Situation app, which may meet your use case. Happy to introduce you to the partner if there's interest: https://splunkbase.splunk.com/app/5324

 

The RAISE Situation app is used by Splunk administrators to monitor data sources coming into their Splunk environment. This app allows admins to easily customize alert thresholds for any data sources that stop sending logs into Splunk or have a large statistical change in the amount of logs sent into Splunk. With the RAISE Situation app, Splunk admins have full visibility into the efficacy of their overall data collection in Splunk.
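If you just want the basic "source went quiet" part of that without an app, a rough sketch of the same idea is a single tstats search over everything (the index=* scope and the 60-minute silence threshold are just assumptions to adjust):

| tstats latest(_time) as last_event where index=* by index, sourcetype
| eval minutes_silent = round((now() - last_event) / 60, 0) ``` minutes since each index/sourcetype last sent data ```
| where minutes_silent > 60 ``` flag anything silent for more than an hour ```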

 

  

JRW
Splunk Employee
As with anything in Splunk, there are a million ways to do it, so here's something basic to start with:
 
index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-15m ``` or another time range ```
| stats sum(<your_volume_field>) as data_volume ``` your volume field, e.g. bytes_sent ```
| eval threshold=1000000 ``` set the desired threshold for data volume ```
| eval is_reduced_volume = if(data_volume < threshold, "Yes", "No")
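And since you mentioned not wanting one alert per index, the same idea can run across everything in one scheduled search with tstats, comparing the last complete hour against the hour before it. A rough sketch (the hourly windows and the 50% drop threshold are just assumptions to adjust):

| tstats count where index=* earliest=-2h@h latest=@h by _time span=1h, index, sourcetype
| eval period = if(_time >= relative_time(now(), "-1h@h"), "current", "previous")
| stats sum(eval(if(period="previous", count, 0))) as prev_count sum(eval(if(period="current", count, 0))) as curr_count by index, sourcetype
| eval pct_drop = round((prev_count - curr_count) / max(prev_count, 1) * 100, 1)
| where pct_drop >= 50 ``` alert when a source drops by half or more versus the previous hour ```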
You can get a little deeper with ML:

index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-15m
| timechart span=1m sum(<your_volume_field>) as data_volume
| fit DensityFunction data_volume threshold=0.01 ``` requires the Machine Learning Toolkit; threshold is the expected fraction of outliers ```
| eval is_anomaly = if('IsOutlier(data_volume)' = 1, "Yes", "No")
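If you go the MLTK route, one common pattern is to train the model on a longer baseline in one scheduled search and then apply it in a lightweight alert search. A sketch along those lines, assuming a saved model name of volume_baseline (train daily, score every 15 minutes):

index=<your_index> sourcetype=<your_sourcetype> earliest=-7d ``` scheduled daily: learn normal per-minute volume ```
| timechart span=1m sum(<your_volume_field>) as data_volume
| fit DensityFunction data_volume threshold=0.01 into volume_baseline

index=<your_index> sourcetype=<your_sourcetype> earliest=-15m ``` scheduled every 15 minutes as the alert search ```
| timechart span=1m sum(<your_volume_field>) as data_volume
| apply volume_baseline
| where 'IsOutlier(data_volume)' = 1 ``` trigger the alert when the number of results is greater than 0 ```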