Alerting

How do you alert if a certain number of consecutive events exceeds a threshold?

dmoulais
New Member

I see lots of variants of this question, but I have yet to encounter this specific case ...

I have thousands of incoming events over time ... e.g.

           disk     mem
eventX      10        80
eventX      10        80
eventX      10        80
eventX      10        80
eventX      10        20
eventX      10        20
eventX      10        20
eventX      10        20
eventX      10        20
eventX      10        20
eventX      10        20
eventX      10        20
eventX      10        20
eventX      10        20
eventX      10        80

I want to alert ONLY if 10 consecutive events have a value at or below the threshold ... consecutive being the key word there. For example, the data above should alert, since 10 consecutive events have a mem value <= 20. I hope this is enough detail to get my intent across.

1 Solution

adonio
Ultra Champion

hello there,

run this search anywhere; we will use streamstats:

| makeresults count=1
| eval data = "eventX 10 80;eventX 10 80;eventX 10 80;eventX 10 80;eventX 10 20;eventX 10 20;eventX 10 20;eventX 10 20;eventX 10 20;eventX 10 20;eventX 10 20;eventX 10 20;eventX 10 20;eventX 10 20;eventX 10 80"
| makemv delim=";" data
| mvexpand data
| rex field=data "(?<a_field>[^\s]+)\s(?<metric_a>\d+)\s(?<metric_b>\d+)"
| table data metric_*
| rename COMMENT as "the above generates data below is the solution" 
| rename COMMENT as "here we use streamstats and capture the minimum value of each 10 events, so if you have 100 events, it looks at events 1-10, 2-11,3-12 ..."
| rename COMMENT as "we are leveraging the max function to find the maximum of a group of 10, if its 20 or less, find that event"
| streamstats window=10 current=t max(metric_b) as max_value
| search max_value<=20
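
To adapt this to real data and turn it into an alert, a minimal sketch could look like the search below. The index, sourcetype, and the field name mem are assumptions for illustration, not values from this thread; substitute your own:

index=your_index sourcetype=your_sourcetype
| rename COMMENT as "the names above are placeholders; mem is assumed to be an already extracted numeric field"
| streamstats window=10 current=t max(mem) as max_value
| where max_value<=20

Save that as a scheduled search and set the alert condition to trigger when the number of results is greater than zero.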

hope it helps


manish_singh_77
Builder

Hi @adonio

I need to set up an alert when I see consecutive "failure" values in the job_result field; can you help?

If 4 consecutive jobs fail, I should be alerted.

For example:

job_result
success
failure
success
failure
failure
failure
failure


adonio
Ultra Champion

try this anywhere, and run the search a couple of times to see how the random data plays out:

| gentimes start=-1 increment=1m
| head 20
| eval _time = starttime
| table _time
| eval value=random()%3
| eval job_result = if(value=0,"success","failure")
| sort - _time
| rename COMMENT as "the above generates data below is the solution"
| streamstats current=t count as consecutive_count reset_after="("job_result==\"success\"")" by job_result
| eval alert = if(consecutive_count>=4,"ALERT",null())
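
Against real job data, the same pattern might look like the sketch below; the index, sourcetype, and the assumption that a job_result field already holds success/failure values are mine, not from this thread:

index=your_index sourcetype=your_job_logs
| streamstats current=t count as consecutive_count reset_after="("job_result==\"success\"")" by job_result
| where job_result="failure" AND consecutive_count>=4

Schedule it and trigger the alert whenever the search returns any results.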