We are unable to get the alert emails even though events matching the alert condition are present in Splunk Cloud.
Could you please help us resolve this?
Hi, we are also facing the same issue since around 11 AM BST this morning. No scheduled alert/report emails are being sent. We also tried the test email, but that didn't work either. Thank you.
Search splunkd.log for "sendemail" to see whether Splunk is reporting errors when sending email. If not, your email provider may be discarding the messages as spam; contact them.
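For example, a search along these lines should surface any send failures (a sketch, assuming the default _internal index and standard splunkd field extractions; adjust to your environment):

index=_internal sourcetype=splunkd sendemail log_level=ERROR

If that returns nothing, widening it to log_level=WARN* can catch soft failures such as SMTP retries.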
We tried sending an email using the search below, but no luck:
index=main | head 5 | sendemail to="firstname.lastname@example.org" server="localhost" subject="Test Mail" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true
Also, the search below does not show any errors:
index="_internal" source="/opt/splunk/var/log/splunk/python.log" sendemail
INFO sendemail:184 - Sending email
INFO sendemail:1516 - Generated PDF for email
We have checked email delivery for two different domains (both of which received these emails until yesterday) and found no issues with email blocking or blacklisting.
Can you please suggest what else we could check?
Something changed yesterday to prevent Splunk Cloud emails from being delivered. I suggest opening a Support Request to have Splunk check things on their end, and also working with your network team to verify that Splunk Cloud email is allowed in.
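As a supporting check, the scheduler logs can confirm whether the alerts actually fired during the outage window, which separates a scheduling problem from a delivery problem. A sketch, assuming the default _internal index; the saved search name is a placeholder you would replace with your alert's name:

index=_internal sourcetype=scheduler savedsearch_name="<your alert name>" | stats count by status

If the runs show a success status but no emails arrived, the problem is on the delivery path rather than in Splunk's scheduler.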
Thanks, @richgalloway. We have raised a ticket with Support for this. We also checked with our internal IT team and no emails were blocked; the emails were not received by the other domain either. The emails were not received for almost four hours, and then, without any action on our part, delivery resumed.
This sounds like one or more mail servers between Splunk Cloud and your mail servers had an issue and could not deliver mail immediately. They queued the messages and sent them later, once the temporary resource issue was resolved. In the old days that was quite a common situation, when servers and users had more limited quotas, etc.
If it were a case of queueing, we would have received all the hourly alert emails once the temporary resource issue was fixed, but that was not the case: we missed those emails completely, and they never arrived later.
Also, several different email servers were affected, and only Splunk emails were not being received. Without making any changes on these email servers, we started receiving emails again at around 15:00 BST; the issue was only observed between 11:00 and 15:00 BST on 31st Jul.
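In case it is useful, the internal mail logs can be scoped to that window with a search like the following (a sketch, assuming the default _internal index; the earliest/latest values are placeholders for the affected period):

index=_internal (source=*python.log* OR sourcetype=splunkd) sendemail earliest="<window start>" latest="<window end>"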
Is there anything more specific you would suggest we check?