I've set up an alert based on a search that I know returns results. However, the alerts aren't firing.
Here is the search string:
sourcetype=log4j API_Response_Duration>5000 |stats count(API_Response_Duration) as "API Resp > 5 Seconds" by API_Method
The job runs every hour at 15 minutes past the hour.
I set the alert to fire when Number of Results > 1
Have you looked in the scheduler.log to ensure the search is actually running? It should also tell you if results were found.
index=_internal source=*scheduler.log searchname
Then, if the search is running, have you looked in python.log to see if the alert is actually being sent? If it is, is it possibly stuck on a mail server somewhere and/or in a spam box?
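If it helps, a search along these lines should surface the email action's log entries (the `sendemail` term is just a keyword filter to narrow the results; adjust as needed for your version):

```
index=_internal source=*python.log* sendemail
```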
Are you sending email from Splunk to a "localhost" MTA, or are you sending it to a remote MTA? If local, look in the mail logs and see if the email is queued; if remote, make sure the remote server accepted it and routed it.
Look at your alert_actions.conf to determine what your MTA is.
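For reference, the setting lives in the `[email]` stanza of alert_actions.conf. A minimal sketch, assuming defaults (the hostname and sender address below are placeholders, not your actual values):

```ini
# $SPLUNK_HOME/etc/system/local/alert_actions.conf (or an app's local/ directory)
[email]
mailserver = localhost          ; the MTA Splunk hands mail to; may be a remote host:port
from = splunk@example.com       ; placeholder sender address
use_tls = 0
```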
No, the email address should just be
The email addresses weren't entered that way, and they show up correctly through the front-end alert properties. Is this an indication that something is wrong or configured incorrectly?
What version are you running? I will create one and see. My question still stands: in your conf file, what is the mail server set to? Please let me know the answer to both.
Here's an entry from today's scheduler.log. It seems to be running, but still no email.
07-15-2016 16:16:07.519 +0000 INFO SavedSplunker - savedsearch_id="nobody;search;API_Response > 5 Seconds", user="admin", app="search", savedsearch_name="API_Response > 5 Seconds", status=success, digest_mode=1, scheduled_time=1468599300, window_time=0, dispatch_time=1468599301, run_time=65.782, result_count=7, alert_actions="email", sid="scheduler__admin__search__RMD58218cf9de9be1944_at_1468599300_39552", suppressed=0, thread_id="AlertNotifierWorker-0"
Okay, I will try to repro the "u" part and see if that is normal.
What I want you to do, however: assuming this is Linux and your mail is set to go to localhost, you are running Postfix or Sendmail as an MTA on your box.
This means you should have something like /var/log/maillog.
You should see the MTA accept an email from Splunk and then do something with it. Do you see log entries of that sort? Perhaps the mail is queuing on your local box, or you are getting a "relaying denied" from the upstream host, or your DNS is not working so it can't work out who to send it to, OR OR OR.
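To illustrate what to look for, here's a sketch using fabricated Postfix-style log lines (sample data only; your maillog path, daemon names, and exact format may differ):

```shell
# Fabricated maillog excerpt, written to a temp file for demonstration
cat > /tmp/maillog.sample <<'EOF'
Jul 15 16:16:08 host postfix/smtp[1234]: ABC123: to=<user@example.com>, relay=mail.example.com[10.0.0.5]:25, status=sent (250 2.0.0 OK)
Jul 15 16:17:02 host postfix/smtp[1235]: DEF456: to=<user@example.com>, relay=none, status=deferred (Host not found)
EOF

# Deliveries that were handed off successfully
grep -c 'status=sent' /tmp/maillog.sample
# Deliveries stuck in the queue (deferred)
grep -c 'status=deferred' /tmp/maillog.sample
```

On a real box you'd run the same greps against /var/log/maillog; `status=sent` means the MTA handed the message off, while `deferred` or `bounced` points at a queueing, relay, or DNS problem.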
If we know Splunk is handing the email to the local instance successfully, we can move the troubleshooting to your MTA rather than looking at Splunk.