I am configuring throttling for a Splunk alert. It is set to trigger once per event, and I am throttling for 8 hours on a particular field.
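For reference, a setup like the one described would look roughly like this in `savedsearches.conf` (a sketch only — the stanza name, index, sourcetype, and schedule are placeholders, not from the original post):

```ini
# Hypothetical saved search alerting per queuename when count > 0
[Queue depth alert]
search = index=main sourcetype=queue_metrics | stats count by queuename | where count > 0
cron_schedule = */15 * * * *
enableSched = 1
dispatch.earliest_time = -15m
dispatch.latest_time = now
# Throttle per queuename value for 8 hours
alert.suppress = 1
alert.suppress.fields = queuename
alert.suppress.period = 8h
```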
How sophisticated is Splunk's Alert Throttling?
For example:
An alert is generated for the field queuename when count > 0. Queue1's count rises to 1, and an alert is generated. Throttling says another alert for queuename Queue1 will not be generated for 8 hours.
However, what if Queue1's count drops to zero after 1 hour, and an hour after that (2 hours total) it rises to 3000? Will an alert be generated, or will the throttle applied earlier still be in effect?
In essence, I guess I'm asking whether the throttle timer resets if a field comes out of alarm.
Anyone? Is there a way to generate an alert during the throttle period if the transaction failure count is higher than the count from the first alert that was generated?
TTBOMK, there is no concept of an 'alert reset condition' or of 'a field coming out of alarm', so in your use case the alert will not be generated again until 8 hours after the last time it triggered, even if the alerting condition no longer exists in the meantime.
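One common workaround for the escalation question above (this is not a built-in "break through the throttle" feature, just a pattern): define a second saved search with a higher threshold and its own throttle window, so a large spike fires the critical alert even while the lower-severity alert is suppressed. The threshold of 1000 and all names below are illustrative assumptions:

```ini
# Hypothetical second alert that fires independently of the first one's
# throttle window, because each saved search tracks suppression separately
[Queue depth alert - critical]
search = index=main sourcetype=queue_metrics | stats count by queuename | where count > 1000
cron_schedule = */15 * * * *
enableSched = 1
dispatch.earliest_time = -15m
dispatch.latest_time = now
alert.suppress = 1
alert.suppress.fields = queuename
alert.suppress.period = 8h
```

Since throttling state is kept per saved search, the critical alert can still fire during the first alert's 8-hour suppression window.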