Let's say I create an alert for when the count of field_A is greater than 10 for any one user_id. The alert looks back 7 days to see if count(field_A) > 10 for one user_id and the alert checks daily.
Currently, if user_id 12345 has a count(field_A) > 10 on Monday, my alert will keep triggering on user_id 12345 until the following Monday, creating redundant alerts. I can't change the schedule: I need the 7-day lookback and I need the daily check.
How can I prevent user_id 12345 from triggering the alert 7 days in a row, assuming that user's activity has stopped?
It seems like I could log the alert results somehow and check whether user_id 12345 appears in the past week's alerts, but I'm not sure how to make this happen. I have configured the alert to both email and log an event, but I am only getting the emails. My log event action uses the default values, except that I set sourcetype to "splunk_alerts"; I left the index as the default, and I've tried leaving Host blank or specifying "localhost". I don't know what else to try, and I'm not sure this is the right rabbit hole to continue down, since I don't know whether the alert results (with user_ids) even get logged, or whether it's just a log event saying that there was an alert.
The only alternative that comes to mind is having the alert write out a CSV and then creating some job to import that CSV into Splunk so future alerts can be compared against it. That seems like a hassle, though.
I am a Power user btw.
Have you tried setting up throttling on the user_id field? --> https://docs.splunk.com/Documentation/Splunk/latest/Alert/ThrottleAlerts
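In the UI this is the "Throttle" checkbox under the alert's trigger conditions ("Suppress results containing field value" plus a suppression period). If you manage alerts in config files instead, a sketch of the equivalent savedsearches.conf stanza looks like this (the stanza name is a placeholder for your alert's name; the settings themselves are standard):

[My field_A alert]
# Enable per-result throttling
alert.suppress = 1
# Suppress repeat alerts that share the same value of this field
alert.suppress.fields = user_id
# How long a given user_id stays suppressed after it triggers
alert.suppress.period = 7d

With a 7d suppression period, user_id 12345 triggers on Monday and then stays quiet for the rest of the lookback window, which is exactly the redundant-alert case described above.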
Alternatively, you can use a CSV lookup to remember which users have already alerted.
You can even do this within the search that generates your alert:
<your search>
| inputlookup alert_lookup_example.csv append=true
| eval now=now(), alerted_time = coalesce(alerted_time, now)
| stats min(alerted_time) as alerted_time, max(now) as now by user_id
| eval throwout_threshold = now - (3600 * 24 * 14)
| where alerted_time > throwout_threshold
| outputlookup alert_lookup_example.csv
| where alerted_time = now
This auto-generates and maintains the lookup CSV, so the same user never alerts twice while it is still in the lookup.
The throwout_threshold defines how long a user_id sits in the CSV before it is thrown out (14 days in this example), after which that user can alert again.
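One caveat worth checking in your environment (this is an assumption about first-run behavior): inputlookup errors if the lookup file does not exist yet, so you may need to seed alert_lookup_example.csv once with a throwaway row before the alert's first run, for example:

| makeresults
| eval user_id="placeholder", alerted_time=0
| outputlookup alert_lookup_example.csv

The "placeholder" row is purely illustrative; because its alerted_time of 0 is older than any throwout_threshold, the main search ages it out on its first real run.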
Two solutions for the price of one. Thanks!! Throttling did the job.