We have an "All time (real-time)" alert that produced 315 alerts in the first eight hours of the day.
When we run the alert's search query over those same eight hours, we get only six events.
The alert itself is as simple as it gets:
index=<index name>
AND (category="Web Attack"
NOT src IN (<set of IPs>)
)
| table <set of fields>
What's going on here?
We perhaps need one or two more iterations, but I believe we are making progress 🙂
_index_earliest=-15m _index_latest=now index=<your index> | rest of the stuff...
Now this should match only the events that were indexed from 15 minutes ago until now... a bit closer?
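Applied to the original alert, the combined search would look something like this (index name, IP list, and field list are placeholders carried over from the thread). Note that _index_earliest/_index_latest filter on _indextime, while the alert's normal time range still applies to event time, so keep that range wide enough to cover indexing lag:

```spl
_index_earliest=-15m _index_latest=now index=<index name>
    category="Web Attack" NOT src IN (<set of IPs>)
| table <set of fields>
```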
Right.
I found those six events with a query similar to yours.
The real question for me at the moment is this:
Is there a way to schedule these "regular" alerts based on _indextime? Meaning, the alert fires for all events that were indexed in the past 15 minutes, for example.
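This can be done with an ordinary scheduled alert: run it on a cron schedule and put the _index_earliest/_index_latest filter inside the search itself. A minimal savedsearches.conf sketch (the stanza name and the time windows are illustrative, not from the thread; the event-time window is deliberately wider than the index-time window to absorb indexing lag):

```ini
[My Web Attack alert]
enableSched = 1
# run every 15 minutes
cron_schedule = */15 * * * *
# event-time window (wider than the index-time window)
dispatch.earliest_time = -60m
dispatch.latest_time = now
search = _index_earliest=-15m _index_latest=now index=<index name> category="Web Attack" NOT src IN (<set of IPs>) | table <set of fields>
```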
Another option you could try is to throttle the alerts: https://docs.splunk.com/Documentation/Splunk/7.3.1/Alert/Alertexamples
So if you throttle the alert for 1 hour from the UI, does it reduce the number of alerts you receive?
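For completeness, throttling can also be configured in savedsearches.conf rather than the UI. A sketch, assuming a placeholder stanza name (alert.suppress.fields is optional and suppresses per field value instead of for the whole alert):

```ini
[My Web Attack alert]
alert.suppress = 1
# suppress further triggers for one hour after each alert fires
alert.suppress.period = 1h
# optional: throttle per source IP rather than globally
alert.suppress.fields = src
```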