Hi everyone,
I suspect that the following sequence of events caused an alert to stop triggering when due:
1) I cloned the original alert for testing purposes
2) Both alerts found the same results and fired simultaneously
3) I disabled the cloned alert
4) The original alert no longer triggers (no email is sent, no events are logged to our alert index...) even though its search condition is met. When I run the alert's search logic manually, results come back. I have no explanation other than the sequence above.
Has anyone seen this happen before?
Thank you in advance
Did you check the scheduler logs to see if the original alert's search is firing and finding results?

index=_internal sourcetype=scheduler savedsearch_name="yourOriginalAlertSearchNameHere"
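If that returns events, a small variation (same placeholder search name) summarizes how each scheduled run ended and, for skipped runs, why; status and reason are standard fields in the scheduler logs:

index=_internal sourcetype=scheduler savedsearch_name="yourOriginalAlertSearchNameHere"
| fillnull value="n/a" reason
| stats count by status, reason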
Hi @somesoni2
Thank you for responding! I just tried your query and found that two weeks ago a few alerts were skipped with the reason "The maximum disk usage quota for this user has been reached.", which makes sense. That is why I disabled my cloned (testing) alerts: to free up some of my user's disk usage quota (not the best way to proceed, I now know; I should simply have increased my user's quota). That was the first time I noticed alerts not triggering. The second time it happened was today.
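For reference, this is roughly how I have been comparing my dispatch usage against the quota since then (a sketch; the user and role names are placeholders, and note that diskUsage on search jobs is in bytes while srchDiskQuota on the role is in MB):

| rest /services/search/jobs
| search eai:acl.owner="myUserNameHere"
| stats sum(diskUsage) as totalBytes
| eval totalMB=round(totalBytes/1024/1024,1)

and the quota itself:

| rest /services/authorization/roles
| search title="myRoleNameHere"
| table title, srchDiskQuota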
Today, the query you suggested returns no results, and the only change I have made in the past two weeks that could affect the original alert was, as mentioned, disabling the cloned alert. I have now deleted the cloned alert and am waiting to see whether the original alert triggers again.
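In the meantime I verified that the original alert is still enabled and on the schedule (a quick sanity check via REST; the title is a placeholder):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="yourOriginalAlertSearchNameHere"
| table title, disabled, is_scheduled, cron_schedule, next_scheduled_time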
Is there something more I can verify?
Screenshot of the original alert's most recent skipped run:
Screenshot of the cloned alert being skipped:
BTW, I am using version 9.0.2208.3
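While I wait, I am also keeping an eye on skipped scheduled searches in general, in case other alerts are affected:

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason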