We moved to a couple of new indexers and set up search head pooling. Now none of the alerts we had working before sends an email. The searches still show up and you can run them just fine. New searches/alerts that are configured now work fine; it's only the searches that were configured before we moved everything that are affected.
You should check your scheduler.log to see if the search fired an alert_action, and if it did, what python.log says happened with the specific alert action that was triggered. Both of those logs are in $SPLUNK_HOME/var/log/splunk/. If you see that python says it is sending the email and you aren't getting errors back in the logs, then the problem is likely with your mail server. At that point you should probably do a tcpdump to see what the conversation with the mail server tells you about the situation.
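For what it's worth, here is roughly how I'd grep those two logs from the command line (assuming $SPLUNK_HOME is set in your shell, and using "My Old Alert" as a placeholder for your actual saved search name):

# See whether the scheduler ran the saved search and triggered an alert action
grep "My Old Alert" $SPLUNK_HOME/var/log/splunk/scheduler.log | grep -i alert_actions

# Then see what the email action logged around that same time
grep -i sendemail $SPLUNK_HOME/var/log/splunk/python.log | tail -50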
Here is an example of running the tcpdump command to look for outbound email on port 25/tcp:
tcpdump -s0 -xX port 25
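If you know the mail server's address, you can also narrow the capture or write it out to a file to review later (mail.example.com here is just a stand-in for your actual mail host):

# Only capture SMTP traffic to/from the mail server, full packets, no name resolution
tcpdump -s0 -nn -xX host mail.example.com and port 25

# Or write the capture to a file you can open in Wireshark later
tcpdump -s0 -nn -w smtp_capture.pcap port 25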
Check the tcpdump output. If Splunk had gotten an error back from your mail server, it would show up in the logs; since you didn't mention any errors there, this is most likely a problem with the mail server itself.
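One more thing worth trying: talk SMTP to the mail server by hand from the search head to rule out connectivity or relay restrictions. Something like the following (swap in your real mail host and addresses):

telnet mail.example.com 25
HELO searchhead.example.com
MAIL FROM:<splunk@example.com>
RCPT TO:<you@example.com>
QUIT

If the RCPT TO line gets rejected with something like "relaying denied", the new search head's IP probably isn't allowed to relay through that mail server, which would explain why only the moved setup is affected.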
I'm seeing jobs run in scheduler.log with alert_actions="email", and python.log says an email was sent, but those emails rarely actually go through. New alerts seem to work 100% of the time, though.