Alert not triggered for a brief amount of time.

pdantuuri0411
Explorer

Hi, we have an alert to detect OutOfMemory errors; it runs every minute and searches the last minute of data. We noticed that the alert was not triggered for around 20 minutes, even though the search results met the criteria to trigger an alert.
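
For context, the saved search presumably looks something like the sketch below; the index, sourcetype, and search string are placeholders rather than the actual configuration, and the one-minute window would typically come from the alert's time range (dispatch.earliest_time = -1m@m, dispatch.latest_time = @m) rather than being written inline:

    index=jboss sourcetype=jboss:server "java.lang.OutOfMemoryError"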

We checked the scheduler log and the alert was not skipped. It's strange to see result_count=0, as there was an error at 14:14:19.
Any reason why this would happen, or anything else I might want to check?

05-02-2020 14:15:04.388 -0500 INFO SavedSplunker - savedsearch_id="nobody;search;DOIT(1m)-JBOSS 7 OutOfMemory", search_type="scheduled", user="4120", app="search", savedsearch_name="DOIT(1m)-JBOSS 7 OutOfMemory", priority=default, status=success, digest_mode=1, scheduled_time=1588446900, window_time=0, dispatch_time=1588446904, run_time=0.352, result_count=0, alert_actions="", sid="scheduler_412037search_RMD5f897e3bb04038a2c_at_1588446900_8314", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool=""
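
To pull the scheduler entries covering the whole 20-minute gap, something like the following against the internal index should work; the savedsearch_name is taken from the log entry above, and the listed fields are auto-extracted from these scheduler events:

    index=_internal sourcetype=scheduler savedsearch_name="DOIT(1m)-JBOSS 7 OutOfMemory"
    | table _time scheduled_time dispatch_time run_time result_count status suppressed

Given the one-minute window, one thing worth ruling out is indexing lag: the 14:15:00 run searched roughly 14:14:00 to 14:15:00, so if the 14:14:19 event was indexed after the search dispatched at 14:15:04 (compare _indextime against _time on the event), that run would legitimately report result_count=0.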

Regards.
