Can't get Alert Manager to display an alert in Incident Posture. Trying to get it to work for a POC.
From looking at the logs (python.log), it appears that when the alert fires it executes alert_handler.py, but nothing is written to the "alerts" index (zero events). I am assuming this is why nothing appears under Incident Posture or in the Incident Overview report.
I figure I must have missed something simple. I have installed Alert Manager twice. Is there anywhere else I can look or check?
Tada... got an alert to appear in Incident Posture!
FYI I am running Splunk on Win 2012.
It is installed on the D:\ drive
I created a folder called tmp (d:\tmp) and gave Everyone full control (not sure if I needed to make it that open).
It created two files, stderr and stdout (both zero bytes).
Other FYI: lines 23 and 24 were already uncommented.
Going to do some more testing, but it looks good so far.
Hi,
I had the same difficulties on a Linux search head.
The solution was to modify inputs.conf as follows, changing the Windows-style backslashes in the script stanza to forward slashes:
[script://.\bin\alert_manager_scheduler.path] changed to [script://./bin/alert_manager_scheduler.path]
Also, the "alerts" index was created on the indexer cluster, but that was not enough. It was also necessary to create an empty one on the search head, as you would on a heavy forwarder.
Rgds
Dan
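For reference, a minimal indexes.conf stanza for an empty "alerts" index on the search head might look like the sketch below. The index name comes from the thread; the path values are stock Splunk defaults and may need adjusting for your deployment.

```ini
[alerts]
homePath   = $SPLUNK_DB/alerts/db
coldPath   = $SPLUNK_DB/alerts/colddb
thawedPath = $SPLUNK_DB/alerts/thaweddb
```

A restart of the search head is required after adding the stanza.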
Having the same issue; what did you do to fix it?
Hi
Can you double-check the Alert Manager's main logfile at $SPLUNK_HOME/var/log/splunk/alert_manager.log for any entries? If possible, can you provide the whole log file to me, e.g. via https://gist.github.com/?
Thanks
Sorry, there is no alert_manager.log in this location.
There are 3 other logs:
alert_manager_helpers_controller.log
alert_manager_scheduler.log
alert_manager_settings_controller.log
OK, an empty alert_handler.log means alert_handler.py didn't even start, and python.log isn't really helpful. Try these steps:
Check splunkd.log to see whether the script fails to run.
Search for
index=_internal source="*splunkd.log" alert_handler.py exited with status code:
And check whether there is any Python exception in splunkd.log after the message mentioned above.
Also, if possible, paste the saved search stanza for your alert from savedsearches.conf, just to double-check.
Update: please also double-check that the option "List in Triggered Alerts" is activated in the alert settings!
In splunkd.log I am finding the following entry, with nothing related before or after it:
04-13-2015 14:00:02.482 -0700 ERROR script - sid:scheduler__nmurphy__search__RMD5ada861e3c4e7d72f_at_1428958800_4 command="runshellscript", Script: D:\Program Files\Splunk\bin\scripts\alert_handler.py exited with status code: 1
In savedsearches.conf
[Test Alert]
action.email.reportServerEnabled = 0
action.email.useNSSubject = 1
action.script = 1
action.script.filename = alert_handler.py
alert.digest_mode = 0
alert.severity = 2
alert.suppress = 0
alert.track = 1
counttype = number of events
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
enableSched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = search
request.ui_dispatch_view = search
search = index=perfmon sourcetype="Perfmon:FreeDiskSpace" instance="C:" PercentFreeSpace<"75" | dedup host
Also verified that "Run a script" is enabled, with alert_handler.py as the filename.
Created the symbolic link per your instructions.
Can you please open alert_handler.py and uncomment lines 23 and 24? You may have to adjust the path if it's not possible to write to /tmp. Then, after the alert fires again, check whether anything has been written to these two files.