I have configured a Splunk alert with the alert condition set to "Trigger for each result". But every time, I only get the alert for one of those results. Any idea why?
Below is the screenshot of the alert:
And below is a sample result from the alert query:
Hi @nytins,
first of all, using Yesterday as the Time Range, if you schedule your alert at 10:00 and at 19:00, you will get the same results in both runs.
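In time-modifier terms, the Yesterday preset is snapped to midnight on both ends, so every run during the day searches the same window:
earliest=-1d@d latest=@d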
As for the issue, what happens if you use "Once"?
Then, are you sure that the trigger action you configured can handle more than one result? I don't know PagerDuty.
Ciao.
Giuseppe
Both "Once" and "For each result" behaves the same way for me. In both cases, I got the alert with only one event from the results. I am assuming PagerDuty doesn't support multiple results.
Hi @nytins,
as I said, I don't know PagerDuty, and the issue is probably that it doesn't permit multiple values.
If you don't have many results, you could create a workaround like the following:
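For example, a sketch of one such workaround (with a placeholder index, sourcetype, and conditions) is to collapse all matching results into a single row with stats, so that one notification carries every result as multivalue fields:

index=your_index sourcetype=your_sourcetype your_search_conditions
| stats count AS result_count values(host) AS hosts values(source) AS sources list(_raw) AS raw_events

The alert then triggers on a single event that lists all hosts and raw events.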
Ciao.
Giuseppe
Hi
Have you already looked at the internal logs to see what has happened? There should be entries about the firing of this alert.
r. Ismo
I don't have access to the Splunk servers; these are managed by a central team. Are these logs available to search within Splunk? If yes, how can I search for them?
Those are stored in the _internal index. If you are not part of the Splunk admin team, you probably don't have access to it. You could try
index=_internal
to see if you can see events in that index. If you can, then you can try this:
index="_internal" component=SavedSplunker sourcetype="scheduler" thread_id="AlertNotifier*" NOT (alert_actions="summary_index" OR alert_actions="") app!=splunk_instrumentation
| fields _time app result_count status alert_actions user savedsearch_name splunk_server_group
| stats earliest(_time) as _time count as run_cnt sum(result_count) as result_count values(alert_actions) as alert_actions values(splunk_server_group) as splunk_server_group by app, savedsearch_name, user, status
| table _time, run_cnt, app, savedsearch_name, user, status, result_count, alert_actions, splunk_server_group
It shows which alerts have previously run and what happened.
If you don't have access to the internal logs, then you should ask your Splunk admin team to check what has happened.
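For reference, a narrowed-down version of the search above (with your alert's saved search name filled in as a placeholder) would show just that alert's runs and how many results each run produced:

index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
| table _time status result_count alert_actions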