Alerting

Why is my simple alert not firing?

nfspeedypur
New Member

I have a simple scheduled search that runs every 5 minutes. The search runs fine and I can see there are results, normally between 10 and 20. The alert trigger is set to 'Trigger Condition: Number of Results is > 1'. However, I never get an alert to trigger.

I have checked the Splunk logs and the scheduler shows 100% successful runs. I am not sure what the reason could be for this not sending an alert.

Trial expires in May.

-David

1 Solution

lguinn2
Legend

What is the lag time between when an event is created and when it is indexed?

Consider this example: On a production server, an application writes to abc.log at 8:59:59. The Splunk forwarder sees that abc.log has been modified and collects the data, sending it to the indexer. The data is parsed and written to the main index at 9:00:03 - a 4-second delay (which is pretty quick).

In the meantime, a search is running on the indexer every minute, searching the prior minute's data. Here is a table of the recent executions:

Search runs at:   Start Time:   End Time:
8:59:00             8:58:00      8:59:00
9:00:00             8:59:00      9:00:00
9:01:00             9:00:00      9:01:00

When the search runs at 8:59:00, the event has not yet happened.
When the search runs at 9:00:00, the event from abc.log has not yet been indexed, so it does not appear in the results.
When the search runs at 9:01:00, the event from abc.log exists in the main index, but its timestamp is 8:59:59 - so it is outside the time range of the search! The event will not be part of any search results, so the alert will not be triggered.
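The timeline above can be sketched as a quick simulation (Python here; the timestamps and the one-minute search windows are taken straight from the example):

```python
from datetime import datetime, timedelta

# Example from the post: event written at 8:59:59, indexed at 9:00:03.
event_time = datetime(2021, 1, 1, 8, 59, 59)   # the event's _time
indexed_at = datetime(2021, 1, 1, 9, 0, 3)     # when it lands in the index

def search_finds_event(run_at):
    """A scheduled search over the prior minute: [run_at - 1m, run_at)."""
    earliest = run_at - timedelta(minutes=1)
    in_window = earliest <= event_time < run_at   # timestamp inside the range?
    is_indexed = indexed_at <= run_at             # indexed before the search ran?
    return in_window and is_indexed

runs = [datetime(2021, 1, 1, 8, 59),
        datetime(2021, 1, 1, 9, 0),
        datetime(2021, 1, 1, 9, 1)]
results = [search_finds_event(r) for r in runs]
print(results)  # [False, False, False] - no run ever sees the event
```

Every run comes up empty: the 9:00:00 run has the right window but the event is not indexed yet, and by 9:01:00 the event's timestamp has fallen out of the window.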

While I still think that something else may be going wrong with your searches, you will always risk "missing" events when you do not consider the lag between when an event occurs on a machine and when the information is indexed. You have two choices:

1 - Run a realtime search. This can be quite expensive, but you will not miss events.

2 - Run a scheduled search, but build in a lag. To include a 1-minute lag, your search could use
Search time range: earliest=-2m@m latest=-1m@m (starting 2 minutes ago and ending 1 minute ago)
Cron schedule: */1 * * * * (run every minute)
This works as long as there is less than a 1-minute delay between when the events occur and when they are indexed.
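Under the same example timings, the lagged window does catch the event - a minimal sketch, again using the hypothetical 8:59:59 event and 9:00:03 index time:

```python
from datetime import datetime, timedelta

event_time = datetime(2021, 1, 1, 8, 59, 59)   # the event's _time
indexed_at = datetime(2021, 1, 1, 9, 0, 3)     # when it lands in the index

def lagged_search_finds_event(run_at):
    """Window shifted back one minute: [run_at - 2m, run_at - 1m),
    i.e. earliest=-2m@m latest=-1m@m."""
    earliest = run_at - timedelta(minutes=2)
    latest = run_at - timedelta(minutes=1)
    return earliest <= event_time < latest and indexed_at <= run_at

# The 9:01:00 run now searches 8:59:00-9:00:00, and the event was
# indexed at 9:00:03, well before 9:01:00 - so it is found.
print(lagged_search_finds_event(datetime(2021, 1, 1, 9, 1)))  # True
```

The one-minute shift gives the indexer time to catch up before the window covering the event is searched.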

Finally, you might want to look at the Splunk Monitoring Console. There is a dashboard for examining scheduled search activity. You may find that not all of your scheduled searches are being run, or that there are other problems in the environment.


templets
Path Finder

It may just be you need an explicit "field" statement at the end.

See:

https://answers.splunk.com/answers/686813/why-is-my-alert-not-triggering.html?childToView=752021#ans...



nfspeedypur
New Member

I believe you are on the right track: things are not set up correctly. I added an additional, non-email alert action, and that one is working. I believe the issue is now with the email service, not Splunk. Thank you for the help in troubleshooting.


nfspeedypur
New Member

Right now my earliest is -1h. The run time of every minute was just to make the system run more often and get the test completed faster. I have changed the schedule to every 5 minutes and every 15 minutes, but it is still not firing.

I will take a look at the Splunk Monitoring Console.


nfspeedypur
New Member

I have looked into the Splunk Monitoring Console and the Alerts section shows 0 alerts triggered.

I have the search running every 5 minutes just in case there is an issue with the every-minute search.

I still see 0 alerts.


woodcock
Esteemed Legend

When you look at the saved search and "recent runs", do they show any results?


nfspeedypur
New Member

I see a run every minute, for the past 2 hours, with over 20 items each.


nfspeedypur
New Member

[screenshots: recent runs and alert settings]

These are screen caps of the recent runs and the settings.


woodcock
Esteemed Legend

Why "greater than" 1? It should be 0.


somesoni2
Revered Legend

What is the alert search query?


nfspeedypur
New Member

source="Perfmon*" counter="% Free Space" Value<60

Condition - If Number of Events - is greater than - 1


woodcock
Esteemed Legend

How do you know it is not triggering? Is it here?

| rest /servicesNS/-/-/alerts/fired_alerts
| search NOT title="-"

nfspeedypur
New Member

That returns 0 results. I see recent activity showing the searches and their results, but 0 alerts under 'Searches, Reports, and Alerts' for that one search. In addition, the alert is set to trigger an email, which it is not doing.
