Alerting

Splunk Alert not behaving as expected

brewster88
New Member

Morning Guys,

Hope everyone is well. I have set up a custom alert in Splunk that runs once an hour and looks at the past hour of activity.

index=index AND site=******** AND act=REQ_CHALLENGE_CAPTCHA AND action=blocked AND url="*******/account/login" AND UserAgent="*iPhone" | bucket span=1m _time | stats count(site) as requests by _time, site, Client_Type, src, UserAgent | where requests > 180

Where I have defined the 180 - is this requests per minute, or does the alert trigger once 180 requests have been exceeded over the past hour of activity?

Guess I just need a little help understanding exactly what I have set up here 🙂

Tom


DavidHourani
Super Champion

Hi @brewster88,

This is requests per minute greater than 180, because the bucket command splits your events into 1-minute chunks. When you then aggregate with stats by _time, the counts stay at 1-minute granularity, so the where clause is evaluated against each individual minute.
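To make that concrete, here is the same search broken out step by step (a sketch reusing the redacted values from your post; the triple-backtick comments need Splunk 8.0 or later, so remove them on older versions):

index=index AND site=******** AND act=REQ_CHALLENGE_CAPTCHA AND action=blocked AND url="*******/account/login" AND UserAgent="*iPhone"
| bucket span=1m _time ``` round each event's _time down to the minute ```
| stats count(site) as requests by _time, site, Client_Type, src, UserAgent ``` one row per minute and per field combination ```
| where requests > 180 ``` keep only the minutes that exceeded 180 requests ```

So the alert fires if any single minute in the past hour had more than 180 requests for a given site/Client_Type/src/UserAgent combination, not if the hour as a whole did.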

If you want the threshold to apply over the entire hour instead, drop the bucketing and the _time split, and run this search hourly:

index=index AND site=******** AND act=REQ_CHALLENGE_CAPTCHA AND action=blocked AND url="*******/account/login" AND UserAgent="*iPhone" | stats count(site) as requests by site, Client_Type, src, UserAgent | where requests > 180
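If you prefer to pin the window in the search itself rather than relying only on the alert's time range picker, you could add explicit time modifiers (a sketch, assuming you want each hourly run to cover the previous complete hour):

index=index AND site=******** AND act=REQ_CHALLENGE_CAPTCHA AND action=blocked AND url="*******/account/login" AND UserAgent="*iPhone" earliest=-1h@h latest=@h | stats count(site) as requests by site, Client_Type, src, UserAgent | where requests > 180

With earliest=-1h@h latest=@h, each run counts a full 60 minutes of activity, and you can set the alert to trigger when the number of results is greater than zero.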

Cheers,
David
