
Why is Splunk Alert not triggering for every result?

nytins
Engager

I have configured a Splunk alert with the alert condition set to "Trigger for each result". But every time, I get the alert for only one of those results. Any idea why?

Below is a screenshot of the alert configuration:
[screenshot of the alert configuration]

And below is a sample result from the alert query:
[screenshot of sample results]


gcusello
SplunkTrust

Hi @nytins,

first of all, if you use Yesterday as the Time Range, scheduling your alert at 10:00 and at 19:00 gives you the same results in both runs.

Regarding the issue: what happens if you use "Once" instead?

Also, are you sure that the trigger action you configured can handle more than one result? I don't know PagerDuty.

Ciao.

Giuseppe


nytins
Engager

Both "Once" and "For each result" behaves the same way for me. In both cases, I got the alert with only one event from the results. I am assuming PagerDuty doesn't support multiple results.


gcusello
SplunkTrust

Hi @nytins,

as I said, I don't know PagerDuty, but the issue is probably that it doesn't permit multiple values.

If you don't have many results, you could create a workaround like the following (a rough SPL sketch follows the list):

  • create a lookup (called e.g. PagerDuty_temp.csv),
  • save your results in this lookup,
  • create a new alert that:
    • searches on this lookup,
    • takes only the first value,
    • sends a message to PagerDuty,
    • removes the used value from the lookup.
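
Here is a rough, untested sketch of what those searches could look like, assuming the lookup is called PagerDuty_temp.csv as above; adapt the fields to your own alert query.

The original alert appends its results to the lookup:

<your original alert search>
| outputlookup append=true PagerDuty_temp.csv

The new alert reads only the first row and uses it for the PagerDuty trigger action:

| inputlookup PagerDuty_temp.csv
| head 1

A follow-up scheduled search then removes the row that was just used:

| inputlookup PagerDuty_temp.csv
| streamstats count as row_num
| where row_num > 1
| fields - row_num
| outputlookup PagerDuty_temp.csv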

Ciao.

Giuseppe


isoutamo
SplunkTrust

Hi

Have you already looked in the internal logs to see what happened? There should be entries about the firing of this alert.

r. Ismo


nytins
Engager

I don't have access to the Splunk servers; they are managed by a central team. Are these logs available to search within Splunk? If yes, how can I search for them?


isoutamo
SplunkTrust

Those are stored in the _internal index. If you are not part of the Splunk admin team, you probably don't have access to it. You could try

index=_internal

to see whether you can see events in that index. If you can, then you can try this:

index="_internal" component=SavedSplunker sourcetype="scheduler" thread_id="AlertNotifier*" NOT (alert_actions="summary_index" OR alert_actions="") app!=splunk_instrumentation 
| fields _time app result_count status alert_actions user savedsearch_name splunk_server_group 
| stats earliest(_time) as _time count as run_cnt sum(result_count) as result_count values(alert_actions) as alert_actions values(splunk_server_group) as splunk_server_group by app, savedsearch_name user status 
| table _time, run_cnt, app, savedsearch_name user status result_count alert_actions splunk_server_group

It shows which alerts have previously run and what happened.

If you don't have access to the internal logs, then you should ask your Splunk admin team to check what has happened.

 
