I have an alert that periodically searches for errors in my application's logs. For reasons I won't bore anybody with, emailing from Splunk is currently disabled in my organisation, and I am similarly unlikely to be granted access to the 'Run a Script' action.
That leaves only 'List in Triggered Alerts'. I wouldn't mind this so much if the list didn't include alerts that reaped zero results. As it stands, I have to repeatedly, tediously click on individual rows of the listing to find out whether there are any results inside.
Any way to only have the trigger list contain rows that reaped more than zero results? Any and all advice, greatly appreciated,
All the Best,
p.s. am I dreaming, or did the sign-up process for this community force me to select "Yes, I would like to receive newsletters etc"? If you can't opt out, why even make it a radio button? Never seen a registration form take that approach in my life...
When you edit the alert's trigger, just set it to trigger based on the number of rows being greater than zero.
Open Splunk -> Search app -> Alerts -> your alert's row -> Edit -> Edit Alert Type and Trigger -> set Trigger Condition to "Number of Results" -> set condition below that to "is greater than" and "0".
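For reference, the same condition can also be set on disk in savedsearches.conf. A minimal sketch, assuming a hypothetical alert named "My Error Alert" (counttype, relation, and quantity are the config equivalents of the UI fields above):

[My Error Alert]
counttype = number of events
relation = greater than
quantity = 0

With this in place, the alert only fires (and only appears in Triggered Alerts) when the search returns at least one event.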
Edit: here's a sample alert that runs every minute. It shows up in Triggered Alerts when the corresponding input is turned on and does not show up when the input is turned off. The alert was enabled the whole time; no rows were deleted from Triggered Alerts.
I have the same issue and it does not seem like intuitive behavior. Triggered alerts should only list when the alert trigger condition was met and not just when the search was run.
I initially created a search with a transaction and a subsearch to catch slow transactions. When I enabled it as a real-time alert, the Triggered Alerts listing quickly filled up. I really thought I had done something wrong, but no emails were sent and clicking on the result links showed no actual results. Running Splunk 6.0 - Build 182037.
...there we go. You can see the three minutes' gap where I turned the input off while leaving the alert enabled. As a result it found zero events, and did not get listed in triggered alerts.
Something else must be going on then. With that condition set, an alert that finds no results does not get listed in Triggered Alerts for me. I'll try to add a screenshot or two to my answer to illustrate, hold on...
Thanks for the reply Martin. I actually do have the condition set to > 0. Unfortunately, the Trigger History lists all times that the alert was triggered, regardless of whether or not any results were found. You need to click on individual rows to find out how many results lie within, which is very inefficient.
In my opinion, the ability to filter 'zero result' rows out of the list would be great; or even a visual cue on each row to tip users off about how many results lie within.
Like I said earlier, you could run a stats command on the search to see which hours had results. There is no option to filter the tracked alert results themselves.
parent search | bucket _time span=1h | stats count by _time | where count > 0
will give you the hours you need to look into.
Thanks for the reply. To elaborate slightly: my alert searches the logs once per hour for errors. Most hours, it finds none. But if I log in to Splunk and click on the alert, I am presented with the Trigger History listing, each row of which offers no clue as to whether it pertains to an hour when errors were found or not.
So I have to trawl the rows, clicking and expanding them in search of hours when errors did occur. This is tedious, because I don't care about hours during which no errors occurred. As explained, having the alert send emails isn't currently an option. Any other possibilities?
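One other possibility: the scheduler logs each run of a saved search to the _internal index, including a result_count field, so you can list just the runs that actually found something. A rough sketch, assuming your alert is named "My Error Alert" (a hypothetical name) and that your role is allowed to search _internal:

index=_internal sourcetype=scheduler savedsearch_name="My Error Alert" result_count>0 | table _time savedsearch_name result_count

Each row of that table is a scheduled run that returned at least one result, which gives you the hours worth drilling into without clicking through Trigger History.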
Not sure what exactly you want to do! The alert itself is configured because the threshold shouldn't normally be crossed. If you want to see whether the search returned more than one record, i.e. the count, you could run a stats command to get it directly.