Splunk Search

How to compare two counts from a single stats search, and alert if there is a large difference between the results?

Xarian
Explorer

I have searched a lot and haven't found a straight answer to this, yet.

I want to create an alert on spikes of load for two hosts. To do this, I am comparing minutes. Ignoring the current minute, as its data is incomplete, I am comparing the previous minute with the one before that. If there is a large difference between those two results, I want to trigger an alert. Currently, I am struggling to compare the two values, as they are just rows in a table. Is there a better way to approach this? Thank you.

This is what I have so far:

index=web host=*EXP0* earliest=@m-2m latest=@m | bucket _time span=1m | stats count by _time
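
Running that gives me a two-row table, one row per complete minute, roughly shaped like this (the counts below are just placeholders), and comparing the two rows is where I'm stuck:

_time                    count
<minute before last>     <count A>
<last complete minute>   <count B>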
0 Karma
1 Solution

sundareshr
Legend

Try this

index=web host=*EXP0* earliest=@m-2m latest=@m | eval when=if(_time>relative_time(now(), "-1m@m"), "current", "previous") | eval dummy=" " | chart count over dummy by when | where current-previous>largenumber
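
For reference, the eval splits the events into "previous" and "current" one-minute buckets, and chart count over dummy by when pivots those into a single row with a previous column and a current column, which the final where can compare directly. largenumber is a placeholder for whatever difference you consider a spike; if the spike could go in either direction, a variation like this (with a hypothetical threshold of 1000) catches drops as well:

index=web host=*EXP0* earliest=@m-2m latest=@m | eval when=if(_time>relative_time(now(), "-1m@m"), "current", "previous") | eval dummy=" " | chart count over dummy by when | where abs(current-previous)>1000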

Xarian
Explorer

Hi Sundareshr, thanks for the response, that looks great!

Both of our searches return the same total count; however, the per-minute counts are different.
My search resulted in (14482 + 15418 = 29900), taking ~5 secs.
Yours resulted in (15240 + 14660 = 29900), taking ~15 secs.

Could the difference be due to each host's interpretation of the now() function varying if the host system clocks aren't identical? As I'm alerting on anomalies per minute, I want to be sure the events fall into the correct minute.
Thanks so much for the help!
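
One thought I had, to take the wall clock out of it entirely, is to anchor the split on the search window rather than now(), using the info_max_time field that addinfo adds (the epoch time of the search's latest boundary). I haven't tested this, so it's only a sketch:

index=web host=*EXP0* earliest=@m-2m latest=@m | addinfo | eval when=if(_time>=relative_time(tonumber(info_max_time), "-1m@m"), "current", "previous") | eval dummy=" " | chart count over dummy by when | where current-previous>largenumber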

0 Karma

sundareshr
Legend

You could also try this

index=web host=*EXP0* earliest=@m-2m latest=@m | bucket _time span=1m | stats count by _time | delta count as diff | where isnotnull(diff)
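
delta adds the difference between each row's count and the one before it; the first row has no previous value, which is what the isnotnull filter drops. To turn it into an alert, compare that field against a threshold; a sketch using a hypothetical threshold of 1000 and an absolute difference so drops trigger too:

index=web host=*EXP0* earliest=@m-2m latest=@m | bucket _time span=1m | stats count by _time | delta count as diff | where abs(diff)>1000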
0 Karma

Xarian
Explorer

Thanks for your help, this worked perfectly. Have a great week.

0 Karma