Splunk alert emails are not being sent after upgrading to 6.3. I tried /local/alert_actions.conf with the right mailserver and from values, but when I try the sendemail command it always defaults to the from value in default/alert_actions.conf. See the error in python.log: 2016-01-16 09:37:47,071 Central Standard Time ERROR sendemail:357 - Sending email. subject="Here is an email notification", results_link="None", recipients="xxxxx"
Sorry for not posting the resolution sooner. We found that at one point we used search head pooling, then disabled it later and moved the shared folder to a local location on the active server. During the upgrade this shared folder was not upgraded. We upgraded it a few days ago and alert emails started flowing. Thank you for the tips!
Invalid sender is the error: you have specified an invalid email address as the sender. Splunk@searcheadservername is missing the TLD (top-level domain: .com, .net, etc.).
Also, alert_actions.conf is a per-app setting, so you can override the default with an alert_actions.conf in your search app, for example.
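A minimal per-app override might look like this (the app, mail server, and sender values here are illustrative, not from your environment):

```ini
# $SPLUNK_HOME/etc/apps/search/local/alert_actions.conf
# Hypothetical example - substitute your own mail server and from address.
[email]
mailserver = mail.example.com
from = splunk@example.com
```

After dropping this in, restart Splunk so the setting is re-read.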
Splunk@searcheadservername - this is what is shown in the error. My setting in the .conf files is in the right format - Splunk@xxxx.com
I checked the email properties from the web UI (Settings -> Email settings) and the values there are correct - Splunk@xxxx.com
I copied alert_actions.conf to the Search app level too and restarted splunkd. Still the same error when trying the sendemail command.
Not sure where Splunk@searcheadservername is being picked up from.
Use this command to see where, if anywhere, splunk@searchheadservername is being picked up:
./splunk cmd btool alert_actions list --debug
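For reference, with --debug each output line is prefixed with the conf file the setting came from, so you can see exactly which file wins in the layering. An illustrative fragment (paths and values are examples, not your output):

```
/opt/splunk/etc/system/local/alert_actions.conf          from = splunk@xxxx.com
/opt/splunk/etc/apps/search/default/alert_actions.conf   mailserver = localhost
```

If the bogus from value appears here, the path on the left tells you which file to fix.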
Is this a search head cluster?
If so, the process of updating is very specific. The only way I know to set alert_actions.conf on a search head cluster is to open Email settings in the admin panel of the UI from any of the search heads (you have to accept the warning message about showing all menu items); the conf then gets distributed to the other members of the cluster. Distributing an app to change alert_actions.conf doesn't work in a SHC (for the server settings, i.e. server name, from email, user/pass), in my recent experience. I believe some settings, like the limits, can be set per app in a SHC, but the core config specifying from, server name, port, etc. can only be set via the web UI on a SHC (again, AFAIK).
Not using a search head cluster. btool on the search head gives me only one instance of "from", and it has the right value -
C:\Program Files\Splunk\etc\system\local\alert_actions.conf
[email]
.
.
from = xxx@xxxxx.com
Sometimes the mail server will only accept emails from its local domain, or from authenticated users, etc. You're not using your corporate email address when you use the sendemail command while using a different from address in your alert_actions.conf, are you?
example:
mailserver.xyz.com will happily receive & relay emails from "splunk@xyz.com" but not from "splunk@anotherdomain.com", and will spit out an invalid sender error.
Other times, you must authenticate with the server first. Although I see no reason why it would work with sendemail vs alert_actions UNLESS you were using different servers during testing, which clearly you are not... but sometimes I like to type what I'm thinking and hope it rings a bell on your end.
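The relay restriction described above can be sketched as a simple domain check, which is roughly the policy an MTA like the hypothetical mailserver.xyz.com applies when it rejects a sender with "553 Invalid sender" (the function and domain names here are illustrative, not Splunk code):

```python
def is_sender_allowed(sender: str, local_domain: str) -> bool:
    """Return True if the sender's domain matches the relay's local domain.

    Mimics a mail server that only relays for its own domain; a sender
    with no '@' or a non-local domain is rejected as an invalid sender.
    """
    try:
        domain = sender.rsplit("@", 1)[1].lower()
    except IndexError:
        return False  # no '@' at all -> invalid sender
    return domain == local_domain.lower()

print(is_sender_allowed("splunk@xyz.com", "xyz.com"))               # True
print(is_sender_allowed("splunk@anotherdomain.com", "xyz.com"))     # False
print(is_sender_allowed("splunk@searchheadservername", "xyz.com"))  # False: bare hostname, no TLD
```

This is why "splunk@searchheadservername" (a bare hostname with no TLD) fails the same way a foreign domain does.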
You are right, jkat54. I am using mailserver.xyz.com with the from email as splunk@anotherdomain.com. I am able to use sendemail by explicitly putting in "from=splunk@anotherdomain.com".
It's still a mystery why the alert_actions value is not being picked up by scheduled searches, alerts, and also by sendemail when from is not explicitly specified 😞
Maybe sendemail is sending my authentication information to the mail server, as opposed to alert_actions. I will investigate in that direction.
When you upgraded, did you overwrite your Splunk directory or just copy it in? Example: cp newsplunk/* /opt/oldsplunk/
I'm curious whether you still have the old sendemail.py code because you didn't do a recursive copy (cp -rf).
We overwrote the Splunk directory during the upgrade, and we have the new sendemail.py. Does the server require the Java JRE for alert emails to work?
Can you compare the sendemail.py from a fresh copy of Splunk with the one currently on your filesystem to see if they are the same size in bytes?
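A quick way to do that comparison is with cmp, which exits 0 only when two files are byte-identical. The paths below are placeholders standing in for the packaged and installed copies of sendemail.py (on a real system you'd point at something like newsplunk/etc/apps/search/bin/sendemail.py and $SPLUNK_HOME/etc/apps/search/bin/sendemail.py):

```shell
# Stand-in files for the demo; replace with the real paths on your system.
printf 'same contents\n' > /tmp/new_sendemail.py
printf 'same contents\n' > /tmp/old_sendemail.py

if cmp -s /tmp/new_sendemail.py /tmp/old_sendemail.py; then
    echo "identical"
else
    echo "DIFFERENT - old sendemail.py may not have been upgraded"
fi
```

Comparing actual bytes is stricter than comparing sizes, so it also catches a stale file that happens to be the same length.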
sendemail.py is the correct size. We opened a ticket with Splunk support; I will update you on the resolution details soon.
No, it's Python-based when it sends email.
Yes. I tried using -
index=main | head 5 | sendemail to=xxxx server=xxxx subject="Here is an email notification" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true
this throws the error -
command="sendemail", (553, 'Invalid sender', 'splunk@searchheadservername') while sending mail to: xxx@xxx.xxx
But when I explicitly specify from, it sends email fine -
index=main | head 5 | sendemail to=xxxx server=xxxx from=yyyy subject="Here is an email notification" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true
I did restart Splunk a few times after setting the right "from" and email server values. I have yet to try btool.
Also try setting the email properties from the web UI (Settings -> Email settings) and see whether the values are updated.
Have you tried sending it with the server parameter specified in the search itself?
After changing the values in the configuration, have you restarted Splunk?
Can you run btool on the config file and see where the value is being picked up from?
I tried a restart. btool on the search head server shows the correct from value. Not sure why the sendemail command is defaulting to a from address that does not exist.