Thanks for your answer. This will help when we want to check the current status of throttled values. What I forgot to mention, though, is that I also want to see a historic view of values that are no longer throttled. My real use case was to understand why an alert fired for some, but not all, expected results, at a time when the throttling had already expired.

Your approach is very nice, but cumbersome to use in daily business, since there is no direct way to get information about the throttling status (you need to hash the values to look them up in the throttling CSV files). Nevertheless, I learned some new things. Thank you for that. 🙂

As a workaround for this information not being directly accessible in Splunk, I got a hint to use the Log Event alert action to write the desired information into an index.
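To make the Log Event workaround concrete, here is a minimal sketch. The index name `alert_history` and the logged field names are assumptions for illustration, not part of the thread; the idea is that the Log Event alert action writes one event per firing, which can then be searched later to reconstruct when the alert actually fired (and was therefore throttled) per host:

```spl
# Hypothetical Log Event payload configured in the alert action
# (field names are assumptions):
#   alert=cpu_usage_high host=$result.host$

# Later, search the target index to see for which hosts the alert
# fired during the last 24 hours:
index=alert_history alert=cpu_usage_high earliest=-24h
| stats earliest(_time) AS first_fired latest(_time) AS last_fired BY host
```

Any host with a firing inside the throttle window would have been suppressed for subsequent results, so this gives an approximate historic throttling view without hashing values.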
Hi,

I had a situation where I wanted to know why an alert wasn't fired for a resource. Therefore I was looking for the field values (I don't know how to describe it better) that are currently stored in Splunk to suppress the alert action from being executed. To make it easier to understand what I mean, here is a short fictional example:

Use case: Monitoring the CPU usage of hosts. When the CPU usage hits the 80% threshold, fire an alert and throttle it for 1 hour, based on the host field.

Question: How can I determine for which hosts the alert is currently throttled?

Note: I'm interested in the throttling list the alert uses, not in approaches that evaluate the CPU usage events.

Thank you in advance.
Jens
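For reference, per-field throttling as described in the example is configured with the `alert.suppress*` settings in savedsearches.conf. A sketch of such a stanza (the alert name is made up; the search and schedule are omitted):

```
# savedsearches.conf -- hypothetical alert stanza for the example
[CPU usage above 80 percent]
# ... search, schedule, and alert condition omitted ...
# enable throttling
alert.suppress = 1
# suppress repeated firings for 1 hour
alert.suppress.period = 1h
# throttle per value of the host field
alert.suppress.fields = host
```

The question is about inspecting the suppression state this configuration creates at runtime, not about the configuration itself.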