Splunk Cloud Platform

monitor SMTP failures

sarit_s6
Engager

Hello
I'm trying to monitor SMTP failures in my Splunk Cloud environment.

I know for sure that on a particular date we had a problem and did not receive any emails, but when I run this query:

index=_internal sendemail source="/opt/splunk/var/log/splunk/python.log"

I don't see any errors.
How can I achieve my goal?

Thanks 


livehybrid
Super Champion

Hi @sarit_s6 

SMTP logs aren't directly available in your Splunk Cloud environment; however, if you log a support ticket, Splunk Support can check the PostMark mail server logs for bounced emails. This could help confirm:
a) whether the alert actually fired correctly from Splunk,
b) whether the email was accepted by the mail relay,
c) whether the relay had any issue sending it on to the final destination.
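
You can check (a) yourself from the scheduler logs. A minimal sketch, assuming your alert's saved search is named "My Email Alert" (a placeholder; substitute your own) and relying on the savedsearch_name, status, and alert_actions fields that scheduler events in _internal normally carry:

index=_internal sourcetype=scheduler alert_actions="*email*" savedsearch_name="My Email Alert"
| stats count by savedsearch_name, status

A healthy alert shows status=success rows here; if nothing comes back at all, the alert never fired and the email stage isn't the problem.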

At a previous customer, we had a number of issues with the customer's email server detecting some of the Splunk Cloud alerts as spam and silently dropping them.

You can contact Support via https://www.splunk.com/support

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing


sarit_s6
Engager

I'm looking for a way to monitor whether emails have stopped being delivered, not to troubleshoot a specific issue.


richgalloway
SplunkTrust

If there are no errors on Splunk's end, then your email provider should be contacted to find out why the messages were not delivered. It's possible the messages were treated as spam, or there was another problem that prevented delivery.

---
If this reply helps you, Karma would be appreciated.

sarit_s6
Engager

We know for sure that Splunk had an issue with sending emails during this time, so it's definitely on Splunk's end.


isoutamo
SplunkTrust

You can try to send an email and then check for those events in _internal.

First, send an email, e.g.:

index=_internal sourcetype=splunkd
| head 1
| sendemail to="your.email@your.domain" subject="testing"

After that, you should see at least this event in _internal:

index=_internal sourcetype=splunk_python sendemail source="/opt/splunk/var/log/splunk/python.log"

Of course, this assumes the previous command worked without issues.

It also requires access to the _internal logs, and possibly certain capabilities to send email.
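
If the test email arrives but you want to catch failures, a keyword pass over the same log is a reasonable next step. This is only a sketch; the ERROR and WARNING tokens are an assumption about how sendemail failures appear in python.log, so verify it against a known-bad time range:

index=_internal source="/opt/splunk/var/log/splunk/python.log" sendemail (ERROR OR WARNING)
| table _time, _raw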


sarit_s6
Engager

I'm getting the log.
All the logs are at level INFO.
I know for sure that Splunk had an issue with sending emails at a specific time, but I cannot see any logs about it in _internal.


isoutamo
SplunkTrust

Have you tried my examples? If you can send email and you have access to those internal logs, then there is at least one log line. If you cannot see it, then you don't have access to those logs.

2025-06-11 18:39:08,616 +0300 INFO	sendemail:275 - Sending email. sid=1749656347.70143, subject="testing", encoded_subject="testing", results_link="None", recipients="['your.email@your.domain']", server="localhost"

How are you sure that the issue is on Splunk's side? Do you have logs showing that, e.g., the alert fired and tried to send email via sendemail? For that reason I suggest first checking that sending email works, and only then looking into why your alerts are not sending it. Quite often the real reason is that the alert never fired.
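
To see when sending stopped, rather than chasing a single event, you can also trend the successful sends. A sketch that counts "Sending email" INFO lines like the one above; an unexpected drop to zero marks the window in which emails stopped going out:

index=_internal sourcetype=splunk_python source="/opt/splunk/var/log/splunk/python.log" sendemail "Sending email"
| timechart span=1h count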

sarit_s6
Engager

Hello

I know for sure that it's on Splunk's end because Splunk told us they had an issue with sending emails.

I'm getting the logs after running your example.


isoutamo
SplunkTrust

You already have an open, ongoing case with Splunk Support, so what do you expect us to offer, especially since you didn't tell us this?

sarit_s6
Engager

I want to monitor this behavior myself and not count on Splunk to notify me when something like this happens.


livehybrid
Super Champion

Hi @sarit_s6 
I understand; unfortunately, access to the relay logs is not possible.


livehybrid
Super Champion

When you say it's a problem at Splunk's end, do you mean with Splunk's relay server or within your own cloud environment? Splunk Cloud sends emails to a local relay before they leave Splunk's infrastructure.

Even if your alerts fired successfully, your Splunk _internal logs may not show errors sending the emails, because the failure happens between splunkd (your actual Splunk process) and an external dependency.

As I said, Splunk Support should be able to access their relay logs and validate where the issue is coming from. Either way, it is not possible for you to directly monitor for failures against the remote SMTP service; you might see some errors if your instance is unable to reach the local relay, but even that is not guaranteed. I wasn't able to find any Splunk apps which monitor the local SMTP connection directly.
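
The closest you can get from your side is to compare, per time window, how many scheduled alerts fired with an email action against how many "Sending email" lines sendemail logged. This is a sketch, not a turnkey monitor: the alert_actions and status field names and the "Sending email" message text are assumptions based on typical _internal scheduler and python.log events, so validate them in your environment first:

index=_internal ((sourcetype=scheduler alert_actions="*email*" status="success") OR (sourcetype=splunk_python sendemail "Sending email"))
| eval stage=if(sourcetype=="scheduler","fired","sent")
| timechart span=15m count by stage
| where fired > 0 AND sent = 0

Saved as a scheduled alert that triggers whenever it returns results, this tells you when the handoff from splunkd to the relay stopped, which is as far as you can see from inside your environment.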
