We have a business requirement to trigger an alert whenever a Queue Depth (the number of requests in a Queue) breaches a predefined threshold. For example, the threshold for Queue Q1 is 40, for Q2 it is 80, and for Q3 it is 50. The number of requests per Queue changes every minute, so the alert search should also run every minute to check whether any of the Queues has breached its threshold. The search does run every minute and lists the alert in the Triggered Alerts section.
The problem is that Splunk triggers only one alert per search. For example, if within the last minute 4 Queues breached their thresholds, it triggers only ONE alert covering all four of them, whereas we want four separate alerts to be raised.
One option could be to run four parallel searches, one per Queue, but this would be very CPU intensive, especially as the number of Queues grows. Is there any way to configure per-event alerts instead of just one per search?
You don't need parallel searches. You can configure your alert to fire once for each result in the search output, i.e., each time a single Queue breaches its threshold.
Just use OR in your search query, e.g. `... (Q1>40 OR Q2>80 OR Q3>50)`, and then set up a real-time alert. In the trigger condition settings, choose "Per-Result", which fires the alert whenever the search returns a result. Each time a Queue breaches its threshold, a separate alert will be triggered.
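If you prefer to configure this outside the UI, the same per-result behavior can be expressed in savedsearches.conf. The stanza below is a sketch, not a drop-in config: the stanza name, index, and sourcetype are made up for illustration, and you should check the settings against your Splunk version. The key line is `alert.digest_mode = 0`, which tells Splunk to run the alert action once per result instead of once per search.

```
# Hypothetical stanza in savedsearches.conf -- names and
# index/sourcetype are placeholders, adjust to your environment.
[queue_depth_alert]
enableSched = 1
cron_schedule = * * * * *
dispatch.earliest_time = -1m
dispatch.latest_time = now
search = index=main sourcetype=queue_stats (Q1>40 OR Q2>80 OR Q3>50)
# Trigger when the search returns at least one result.
counttype = number of events
relation = greater than
quantity = 0
# 0 = fire the alert action once PER RESULT (one alert per breaching Queue);
# 1 (the default) = fire once per search, which is the behavior you are seeing.
alert.digest_mode = 0
```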
It is true that you could have a case where 4 Queues breach their thresholds at exactly the same time, but the percentage of such cases is very low, and with per-result triggering each breaching Queue still produces its own alert.
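One further note on the scaling concern: hardcoding `Q1>40 OR Q2>80 OR ...` becomes unwieldy as Queues are added. A common alternative is to keep the thresholds in a lookup table and compare against them in the search, so one search covers any number of Queues. This is a sketch under assumptions: it presumes your events carry a `queue` field and a per-queue depth, and that you have defined a lookup (here called `queue_thresholds`, a made-up name) mapping `queue` to `threshold`.

```
... | stats latest(depth) AS depth BY queue
    | lookup queue_thresholds queue OUTPUT threshold
    | where depth > threshold
```

With per-result triggering, each row this search emits (one per breaching Queue) raises its own alert, and adding a new Queue only requires a new row in the lookup file.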