We have an alert that sends emails when triggered, and up until a few weeks ago it successfully sent emails to a company email and a PagerDuty alert email. We noticed that we stopped getting the emails for this particular alert, and I want to investigate why. The alert still triggers a few times a day, but the emails never arrive. I've tested that the email addresses still work (for example, sending an email to the given PagerDuty address successfully creates an alert).
All of the docs and info I find about debugging this issue want me to traverse some Splunk log (for example, in $SPLUNK_HOME/var/log/splunk/scheduler.log). However, we use Splunk Cloud, so I don't have access to that log file. How can I find out why emails are not being sent?
I have a similar issue using Insights for Infrastructure. SMTP settings are correctly set up and alerts are configured, but nothing is showing up in python.log (empty, 0 KB) and no maillog.log file is found.
Are there other tests I can run from within Insights for Infrastructure to ensure Splunk can send mail out, and then we can focus on alerting?
Any help is appreciated.
You can try to use the sendemail command from the search bar to see if you can force an email out.
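A minimal test search might look like this (the address and subject are placeholders to replace with your own; `makeresults` just generates a single dummy event to send):

```
| makeresults
| sendemail to="you@example.com" subject="Splunk sendemail test" sendresults=true
```

If this email never arrives either, the problem is with mail delivery in general rather than with your alert configuration.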
To search the internal logs, you can try accessing the _internal index.
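For example, assuming the default log source paths, searches along these lines should surface scheduler and sendemail activity (the saved-search name is a placeholder for your alert's name):

```
index=_internal source=*scheduler.log* savedsearch_name="<your alert name>"
index=_internal source=*python.log* sendemail
```

Any ERROR-level events in those results would be a good starting point.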
Hey Rob, any updates on this? We still aren't receiving any emails from our Splunk Cloud instance, and we don't have a clue about what's wrong. All of the logs look fine and we can't see any errors with sending emails.
I greatly appreciate any assistance you guys can give us.
Most likely there is something wrong with the email server that it's connected to. If there had been an error on the Splunk side, you would have seen it with the searches you have run, and it would have shown up in a log. If the logs report that everything was sent, what that really means is that it was handed off to the mail server to process and actually send out. What do you have set up for a mail server in your Splunk settings, and can you verify the credentials?
Alternatively, you might also want to check if there is a security control in place (or AV?) that might have closed the connection to your mail server.
We are using Splunk Cloud; I was under the impression that the email server was built in to the instance. I checked the mail server settings: the host is 127.0.0.1:25, email security is enabled, and the username/password fields are blank.
I wasn't the original person who set this up, but I asked him and he says he doesn't remember ever having to set up a mail server. Please correct me if I'm wrong and we are indeed supposed to have our own mail server for Splunk Cloud.
Looks like I have the exact same problem as you.
I've tried using the mail host 127.0.0.1:25 as well as another mail host belonging to the customer I'm currently working with. I've also tried sending emails from Splunk to several different email accounts, just to make sure there isn't some kind of filter blocking them out (I've encountered that problem at another customer).
I'm currently using a Splunk Cloud trial instance. I don't know if that matters?
Please don't hesitate to follow up with Cloud support on that. However, here is the link to the docs for setting up email alerting in Splunk Cloud.
In python.log, I see entries saying that the emails are being sent for the alerts using sendemail. I also see in scheduler.log that the alerts are being triggered.
I tried sending an email to both my company email and personal email using index=_internal | head 1 | sendemail to="email@example.com" sendresults=true. I then checked python.log and saw that it tried to send the message. However, I never received the emails, and I don't see any errors.
I think that our Splunk Cloud instance isn't sending emails for some reason, and I'm not sure how to further debug the issue because I don't see any error messages. It looks like everything works in the logs, but then I don't receive the emails.