I have an alert configured in Splunk that should send an email when it is triggered.
The alert appears in the list of triggered alerts, but no email is sent. In the logs I see the following errors:
08-09-2022 13:30:14.510 +0200 ERROR ScriptRunner [26046 AlertNotifierWorker-0] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/mycommunity/bin/sendemail.py "results_link=https://mysplunk.example.com/app/mycommunity/@go?sid=scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194" "ssname=MYBLOG PROD Broken Requests" "graceful=True" "trigger_time=1660044614" results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194/results.csv.gz" "is_stream_malert=False"': File "/opt/splunk/etc/apps/mycommunity/bin/sendemail.py", line 111
08-09-2022 13:30:14.510 +0200 ERROR ScriptRunner [26046 AlertNotifierWorker-0] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/mycommunity/bin/sendemail.py "results_link=https://mysplunk.example.com/app/mycommunity/@go?sid=scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194" "ssname=MYBLOG PROD Broken Requests" "graceful=True" "trigger_time=1660044614" results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194/results.csv.gz" "is_stream_malert=False"': except Exception, e:
08-09-2022 13:30:14.510 +0200 ERROR ScriptRunner [26046 AlertNotifierWorker-0] -
stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/mycommunity/bin/sendemail.py "results_link=https://mysplunk.example.com/app/mycommunity/@go?sid=scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194" "ssname=MYBLOG PROD Broken Requests" "graceful=True" "trigger_time=1660044614" results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194/results.csv.gz" "is_stream_malert=False"': ^
08-09-2022 13:30:14.510 +0200 ERROR ScriptRunner [26046 AlertNotifierWorker-0] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/mycommunity/bin/sendemail.py "results_link=https://mysplunk.example.com/app/mycommunity/@go?sid=scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194" "ssname=MYBLOG PROD Broken Requests" "graceful=True" "trigger_time=1660044614" results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194/results.csv.gz" "is_stream_malert=False"': SyntaxError: invalid syntax
08-09-2022 13:30:14.512 +0200 ERROR script [26046 AlertNotifierWorker-0] - sid:scheduler__d051859__mycommunity__RMD540c2da3bac08625c_at_1660044600_69194 External search command 'sendemail' returned error code 1. .
The error message is very ambiguous, so I don't know what is causing it. It seems like a bug to me. Any ideas?
Environment: Splunk 8.2.7
OS: SLES 15 SP3
I don't know what the exact problem was, but the root cause was a cloned search app. The user's alert was created in a separate app that had been cloned from the Splunk 7.0.0 search app, so it contained a default directory with configs from that version of Splunk. After I upgraded Splunk to 8.2, this legacy configuration started causing trouble because Splunk 7 and 8 are very different.
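This is consistent with the stderr in the question: `except Exception, e:` is Python 2 exception-handling syntax, and Splunk 8 invokes the script with its bundled Python 3.7, which rejects it at parse time. A minimal sketch of the mismatch (the snippets below are illustrative, not the actual sendemail.py from the cloned app):

```python
# Python 2-style exception handling (what the legacy script uses):
# valid under the Python 2 interpreter shipped with Splunk 7,
# a SyntaxError under the Python 3.7 shipped with Splunk 8.
py2_style = "try:\n    pass\nexcept Exception, e:\n    pass\n"

# The Python 3 spelling of the same construct:
py3_style = "try:\n    pass\nexcept Exception as e:\n    pass\n"

def compiles(src):
    """Return True if src parses under the running Python 3 interpreter."""
    try:
        compile(src, "<sendemail.py>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles(py2_style))  # False: the exact SyntaxError from the log
print(compiles(py3_style))  # True
```

So any app still carrying a Splunk 7-era copy of sendemail.py in its default directory will fail exactly this way after the upgrade.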
Solution: I deleted the default directory from the app. The other step was to copy $SPLUNK_HOME/etc/apps/search/default/data/ui/nav/default.xml to $SPLUNK_HOME/etc/apps/<yourapp>/local/data/ui/nav/default.xml, which allowed me to preserve the tabs from the search app.
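The steps above, sketched as shell commands; <yourapp> stands for the cloned app's directory name, and you should back up the app before deleting anything:

```shell
APP="$SPLUNK_HOME/etc/apps/<yourapp>"

# 1. Remove the legacy default/ directory cloned from the Splunk 7 search app
rm -rf "$APP/default"

# 2. Restore the navigation tabs from the current search app
mkdir -p "$APP/local/data/ui/nav"
cp "$SPLUNK_HOME/etc/apps/search/default/data/ui/nav/default.xml" \
   "$APP/local/data/ui/nav/default.xml"

# 3. Restart so Splunk picks up the change
"$SPLUNK_HOME/bin/splunk" restart
```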
There should be more information about the error in python.log.
Unfortunately, no. There are no errors or any other records near the timestamps of the splunkd.log entries.
The closest records are at 2022-08-10 13:02:34,782 and 2022-08-10 14:01:08,141, and they don't seem to be related.