Alerting

Separate alert for every (and only) entry whose count exceeds the threshold

shorokhov
Engager

Hi,

I have CLIENT_CONNECT_AUTH_FAIL log entries in Splunk for different usernames.

I would like to send an alert when the count of CLIENT_CONNECT_AUTH_FAIL entries for a specific username exceeds a threshold (say 10 within the last 5 minutes). An alert should be generated for every user that exceeded the threshold (1 alert per username).

To achieve that, I've used `| stats count by username` and set a custom trigger condition of `search count > 10`, but the results are not as expected 😞
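For context, the full search looks roughly like this (the index, sourcetype, and time window are placeholders, not the actual values from my environment):

```
index=app_logs sourcetype=client_auth "CLIENT_CONNECT_AUTH_FAIL" earliest=-5m
| stats count by username
```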

Consider an example. The stats query produces the following results:

username    count
user1       20
user2       15
user3       5

If I set `Trigger` = `Once`, then I get an alert only for `user1`, even though the count of CLIENT_CONNECT_AUTH_FAIL for `user2` also exceeded the threshold.
If I set `Trigger` = `For each result`, then I get an alert for every username, even though the threshold is not exceeded for `user3`.

What is the right way to do this in Splunk?

1 Solution

shorokhov
Engager

I added `| where count > 10` to the search and set the trigger condition to `Number of Results > 0` with `Trigger` = `For each result`.
This did the magic (:
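For completeness, a minimal sketch of the final search (the index and sourcetype are placeholders):

```
index=app_logs sourcetype=client_auth "CLIENT_CONNECT_AUTH_FAIL" earliest=-5m
| stats count by username
| where count > 10
```

With the `where` filter, only the rows over the threshold reach the trigger, so `For each result` fires exactly one alert per offending username.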
