Alerting

How to trigger an alert when the 403 status code occurs more than 100 times in a second

dhavamanis
Builder

Can you please tell me how to trigger an alert, in real time, when the 403 status code occurs more than 100 times in a second (threshold)?

1 Solution

yannK
Splunk Employee

Real-time search is not the best fit for this kind of measurement, but you can do it if you are ready to pay the performance price and accept some false positives.

  • you can filter the events with the 403 status code
  • the _time field in Splunk is already in seconds (epoch time), so you can count the number of events per second
  • add a condition for count > 100

<mywonderfullsearch> status_code=403 | stats count by _time | where count > 100 | convert ctime(_time) AS time

  • test the search
  • then pick a real-time window that is not too large, say the last 10 minutes
  • then schedule it (see the savedsearches.conf sketch after this list)
  • add an alert trigger "number of results > 0"
  • set up the email action for the alert
  • if needed, check "inline results" to include the details in the email
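
For illustration only, a minimal savedsearches.conf sketch of such a real-time alert could look like the lines below. The stanza name, email address, and cron schedule are placeholders, and the exact attribute set depends on your Splunk version; the alert condition corresponds to "number of results > 0".

    [403_burst_realtime_alert]
    search = <mywonderfullsearch> status_code=403 | stats count by _time | where count > 100 | convert ctime(_time) AS time
    # 10-minute real-time window, as suggested above
    dispatch.earliest_time = rt-10m
    dispatch.latest_time = rt
    enableSched = 1
    cron_schedule = * * * * *
    # trigger when the search returns at least one result
    counttype = number of events
    relation = greater than
    quantity = 0
    # send an email with the results inline
    action.email = 1
    action.email.to = alerts@example.com
    action.email.inline = 1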

Remarks:

  • Uncheck any alert retention: if you have too many alerts (say, one per second...), the preserved search results will fill your dispatch folder and impact your server.
    See alert.suppress and alert.suppress.period in http://docs.splunk.com/Documentation/Splunk/6.1.1/Admin/Savedsearchesconf (a sketch of these settings follows this list).

  • Add details to the search results.
    As an improvement, I would replace stats count by _time with stats count values(host) by _time
    to include the list of all the hosts concerned in the alert.

  • Switch from a real-time to a historical search.
    To avoid false positives, I still recommend running the search as a historical search, for example every 5 minutes over earliest=-7m@m latest=-2m@m (the 2-minute delay accounts for possible indexing lag). A combined sketch follows this list.
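
For the retention/throttling remark, the corresponding savedsearches.conf attributes would look roughly like this sketch; the 60-second period is an arbitrary example value, and alert.track is an extra assumption that keeps triggered-alert artifacts from piling up:

    # throttle the alert so it fires at most once per minute
    alert.suppress = 1
    alert.suppress.period = 60s
    # assumption: do not keep triggered-alert artifacts in the Triggered Alerts list
    alert.track = 0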
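
Putting the last two remarks together, a historical version of the search could be scheduled every 5 minutes and might look like the sketch below. The bin command is an added assumption that forces one-second buckets in case your events carry subsecond timestamps, and values(host) adds the hosts involved, as suggested above.

    <mywonderfullsearch> status_code=403 earliest=-7m@m latest=-2m@m
    | bin _time span=1s
    | stats count values(host) AS hosts by _time
    | where count > 100
    | convert ctime(_time) AS time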


dhavamanis
Builder

I have updated the search query for this alert as below. Please correct me if anything is wrong.

sourcetype=acquiasyslog AND status=403 | stats count by _time, uri_path | where count > 10


dhavamanis
Builder

Thank you so much for the details. Can you please help me with this? We need to set up an alert on the search "sourcetype=acquiasyslog AND status=403 | stats count by uri_path" that triggers when the count for any uri_path exceeds 10 in a second.
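
Following the same pattern as the accepted answer and the draft query above, a per-second, per-uri_path version might look like this sketch (the bin command is again an assumption to force one-second buckets; adjust the base search to your data):

    sourcetype=acquiasyslog status=403
    | bin _time span=1s
    | stats count by _time, uri_path
    | where count > 10
    | convert ctime(_time) AS time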
