Alerting

How can I alert only if there are new events / new user_ids triggering my alert?

vmvd
Explorer

Let's say I create an alert for when the count of field_A is greater than 10 for any one user_id. The alert looks back 7 days to see if count(field_A) > 10 for one user_id and the alert checks daily. 

Currently, if user_id 12345 has a count(field_A) > 10 on Monday, my alert will continue to trigger on user_id 12345 until the following Monday, creating redundant alerts. I have to look back a week and I have to check daily. 

How can I prevent user_id 12345 from triggering the alert 7 days in a row, assuming that user's activity has stopped?

It seems like I could log the alert results somehow and check whether user_id 12345 appears in the past week's alerts, but I'm not sure how to make this happen. I have created an alert action to both email and log the alerts, but I am only getting the emails. My log event is set up with default values, except that I set sourcetype to "splunk_alerts". I left index as the default, and I've tried leaving Host blank or specifying "localhost". I don't know what else to try, and I'm not sure this is the right rabbit hole to continue down, since I don't know whether the alert results with user_ids even get logged, or if it's just a log event saying that an alert fired. 

The only alternative that comes to mind is having the alert create a CSV and then setting up some job to import that CSV back into Splunk and compare against those logs for future alerts. This seems like a hassle, though. 

I am a Power user btw. 


peter_krammer
Communicator

Have you tried to set up throttling on the user_id field? --> https://docs.splunk.com/Documentation/Splunk/latest/Alert/ThrottleAlerts
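For reference, per-result throttling can also be set directly in savedsearches.conf (the stanza name and period below are illustrative, not from the original post):

```
# savedsearches.conf -- suppress repeat triggers per user_id
[my_field_A_alert]
alert.suppress = 1
alert.suppress.fields = user_id
alert.suppress.period = 7d
```

The same options are exposed in the UI under the alert's trigger settings ("Throttle" / "Suppress triggering for"), which is usually the easier route for a Power user without filesystem access.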

Alternatively, you can use a CSV to save the alerted users as an intermediary. 
You can even use it within the search that generates your alert!

<your search> 
| inputlookup alert_lookup_example.csv append=true
| eval now=now(), alerted_time = coalesce(alerted_time, now)
| stats min(alerted_time) as alerted_time, max(now) as now by user_id
| eval throwout_threshold = now - (3600 * 24 * 14)
| where alerted_time > throwout_threshold
| outputlookup alert_lookup_example.csv
| where alerted_time = now

This will auto-generate and fill the lookup CSV and never alert for the same user twice. 

The throwout_threshold defines how long a user_id stays in the CSV before it gets thrown out. 
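To make the dedup-with-expiry logic of that SPL easier to follow, here is a minimal Python sketch of the same idea (a hypothetical helper, not Splunk code): merge the current results with the saved lookup, keep the earliest alert time per user, purge entries older than the retention window, and alert only on users seen for the first time.

```python
from time import time

# two weeks, matching throwout_threshold in the SPL above
RETENTION_SECONDS = 3600 * 24 * 14

def new_alerts(current_user_ids, lookup, now=None):
    """Return only user_ids not already recorded in the lookup.

    lookup: dict mapping user_id -> time it first triggered an alert
    (the in-memory analogue of alert_lookup_example.csv). The lookup
    is updated in place: new users are added, stale entries removed.
    """
    now = now if now is not None else time()
    fresh = []
    for uid in current_user_ids:
        if uid not in lookup:
            lookup[uid] = now        # first sighting: record it and alert
            fresh.append(uid)
    # expire entries older than the retention window so the same
    # user_id can alert again later (mirrors the `where` + outputlookup)
    cutoff = now - RETENTION_SECONDS
    for uid in [u for u, t in lookup.items() if t <= cutoff]:
        del lookup[uid]
    return fresh
```

Note that, like the SPL, a user whose entry has just expired is purged on that run and only alerts again on the following run.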


vmvd
Explorer

Two solutions for the price of one. Thanks!! Throttling did the job.
