I have problems understanding a situation. The problem first manifested itself when a colleague approached me with the issue that his scheduled real-time search is not sending emails when a certain event appears in the log file. I couldn't really comprehend why, as the alert was created and is listed in Triggered Alerts. The condition was "Always" and the alert mode "Once per result", so I don't see a reason why the email isn't being sent.
I have verified that the search head is sending other alerts, so there is no issue with connectivity to the SMTP server.
Secondly, I cloned this search and changed it from real-time to a scheduled search over "-1d to now". I'm still not getting emails, but I am seeing the alerts in "Triggered Alerts". I don't really understand this combination of behaviour: either the alert shouldn't appear in "Triggered Alerts" and no email should be sent, or it should be listed AND an email should be sent.
Or am I missing something?
Check to make sure your Splunk instances as well as the system that you are collecting logs from are synced to NTP.
Having system time off on any of these can absolutely screw up alerting.
That includes validating that the timezones are correct.
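As a quick sketch of that check: on Linux hosts you can compare clocks and timezones across the search head, indexers, and log sources with standard commands (the `timedatectl` line assumes systemd-based hosts; older systems may use `ntpq -p` instead):

```shell
# Print UTC time and the configured timezone on this host;
# run on every Splunk instance and log source and compare for skew.
date -u
date +%Z

# On systemd-based hosts, check whether NTP synchronization is active.
# (Assumption: timedatectl is available; fall back to 'ntpq -p' if not.)
timedatectl 2>/dev/null | grep -i -E 'ntp|synchron' || true
```

Even a few minutes of skew between the indexer and the log source can push events outside a scheduled search's time window, which would explain alerts behaving inconsistently.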
Did you check for any errors in:
index=_internal ( sourcetype=scheduler alert_actions="email" ) OR ( sourcetype=splunk_python "sendemail" )
You did check the basic stuff, like whether sending email is possible at all ;)? Maybe someone changed something outside your Splunk setup?
Again: yes. Other alert emails are being sent, and I have checked the connection to the mail server. I would also expect any problems with email sending to show up in Splunk's internal logs.
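For completeness, here is a minimal sketch of how one could verify the SMTP path from the search head independently of Splunk. The host and addresses are placeholders, not values from this thread; substitute your own mail server and recipients:

```python
# Standalone SMTP check, independent of Splunk's sendemail alert action.
# MAIL_HOST, FROM_ADDR and TO_ADDR are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

MAIL_HOST = "smtp.example.com"    # placeholder: your SMTP server
FROM_ADDR = "splunk@example.com"  # placeholder: alert sender address
TO_ADDR   = "me@example.com"      # placeholder: alert recipient


def build_test_message(from_addr: str, to_addr: str) -> EmailMessage:
    """Build a minimal test email, similar to what an alert action sends."""
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg["Subject"] = "Splunk alert email path test"
    msg.set_content("If you received this, SMTP from the search head works.")
    return msg


def send_test_message(host: str, msg: EmailMessage) -> None:
    """Connect to the SMTP server and send; run manually from the search head."""
    with smtplib.SMTP(host, 25, timeout=10) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    send_test_message(MAIL_HOST, build_test_message(FROM_ADDR, TO_ADDR))
```

If this succeeds while the alert emails still don't arrive, the problem is more likely in the alert configuration (throttling, permissions on the alert action, or the saved search's owner) than in the mail path itself.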