Alerting

How can I trigger an alert if traffic drops 50% compared to the same time in previous weeks?

mleikin
Engager

I want to run an alert every hour that looks at the number of events within that hour and compares it to the number of events in the same hour over the previous 4 weeks. If the current number is 50% of the 4-week average or lower, an alert should be sent out.

1 Solution

dwaddle
SplunkTrust

You should read up on summary indexing to begin with. Start at http://www.splunk.com/base/Documentation/latest/Knowledge/Usesummaryindexing .

The general idea would be to define a summary index that is updated hourly with the past hour's event data. Once you have this data in a summary index, you can search it much more efficiently and alert when your thresholds are met.

UPDATE

The point of summary indexing here is to efficiently collect aggregate information about your data. Suppose you perform a scheduled search similar to this once per hour and summarize the results:

sourcetype=foo | bucket span=1h _time | stats count(_raw) as ec by _time
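
For reference, such a scheduled search could be wired up for summary indexing in savedsearches.conf along these lines (a sketch only; the stanza name hourly_event_counts, the summary index name summary, and the cron offset are placeholders, not anything prescribed in this thread):

[hourly_event_counts]
# run at 5 minutes past each hour, over the previous complete hour
search = sourcetype=foo | bucket span=1h _time | stats count(_raw) as ec by _time
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
cron_schedule = 5 * * * *
enableSched = 1
# write the results into the summary index
action.summary_index = 1
action.summary_index._name = summary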

Your summary data at that point should ideally contain one "row" per hour. After a week's worth of data has been collected, the "same hour" a week ago is 168 "rows" away from any given hour. The "delta" search command can be used with "p=168" to compare against the same hour last week.
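
For example, an hourly alert search against the summary data might look like the following (again a sketch, assuming the summary index is named summary and the scheduled search above was saved as hourly_event_counts; the where clause keeps only hours at or below 50% of the same hour last week):

index=summary search_name="hourly_event_counts" | bucket span=1h _time | stats sum(ec) as ec by _time | delta ec as diff p=168 | eval lastweek=ec-diff | where lastweek>0 AND ec<=0.5*lastweek

Schedule that hourly and alert when the number of results is greater than zero. Getting to the 4-week average asked about in the question would mean looking back at several lags (for example additional delta or autoregress passes at p=336, 504, and 672) and averaging them, which this answer does not go into.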

Strictly speaking, you can do this without the summary index, with something like:

sourcetype=foo | bucket span=1h _time | stats count(_raw) as ec by _time | delta ec p=168

However, performance of this will not be optimal.

dwaddle
SplunkTrust

Please see if the updated information helps you at all.


mleikin
Engager

At this point I am not concerned with how long the query takes to run. What I am trying to do is isolate one hour last week and compare it to the same hour this week. Currently I have this query, but it only compares each hour to the previous hour. I want to compare an hour to the same hour last week:

... earliest=-7d@h | timechart span=1h count | delta count as difference | eval percdif=round(abs(difference/count)*100,0)
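
Following the delta p=168 suggestion above, a variant of this query that compares each hour to the same hour last week rather than to the previous hour could look like this (a sketch; it assumes hourly buckets over the full 7-day window, so the row 168 buckets back is the same hour last week, and it computes the percentage against last week's count rather than the current one):

... earliest=-7d@h | timechart span=1h count | delta count as difference p=168 | eval lastweek=count-difference | eval percdif=round((count/lastweek)*100,0) | where percdif<=50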
