After upgrading Splunk to 6.5.2 and Alert Manager to 2.1.4 (TA 2.1.1) in our production instance, we are no longer seeing any new alerts posted to the alert index.
I created an alert that triggers every minute. Incident Settings shows the alert title, but nothing new is added to the index.
The index does exist:
index=alerts | stats count by date_mday
date_mday count
2 41
3 161
4 128
5 101
Splunk 6.5.2 with Alert Manager 2.1.4 on our QA system works as expected.
Watching alert_manager.log shows alert_manager.py firing:
24 results." (alert_manager.py:360)
2017-04-06 13:49:16,858 INFO pid="8842" logger="alert_manager" message="Incident status after suppresion check: new" (alert_manager.py:422)
Given the error below, is there cause for concern about the index=index argument? (The app is configured to use alerts as the index.)
source = 'alert_handler.py', index=index
splunkd.log shows:
04-06-2017 13:50:29.826 -0600 INFO sendmodalert - Invoking modular alert action=alert_manager for search="Test Alert" sid="scheduler__admin_ZHZhLW9wcy1zdXBwb3J0__RMD5ada861e3c4e7d72f_at_1491508200_114" in app="dva-ops-support" owner="admin" type="saved"
04-06-2017 13:50:30.199 -0600 ERROR sendmodalert - action=alert_manager STDERR - Traceback (most recent call last):
04-06-2017 13:50:30.199 -0600 ERROR sendmodalert - action=alert_manager STDERR - File "/apps/esb/opt/splunk/6.5.2/search-head/etc/apps/alert_manager/bin/alert_manager.py", line 427, in <module>
04-06-2017 13:50:30.199 -0600 ERROR sendmodalert - action=alert_manager STDERR - createIncidentChangeEvent(event, metadata['job_id'], settings.get('index'))
04-06-2017 13:50:30.199 -0600 ERROR sendmodalert - action=alert_manager STDERR - File "/apps/esb/opt/splunk/6.5.2/search-head/etc/apps/alert_manager/bin/alert_manager.py", line 157, in createIncidentChangeEvent
04-06-2017 13:50:30.199 -0600 ERROR sendmodalert - action=alert_manager STDERR - input.submit(event, hostname = socket.gethostname(), sourcetype = 'incident_change', source = 'alert_handler.py', index=index)
04-06-2017 13:50:30.199 -0600 ERROR sendmodalert - action=alert_manager STDERR - File "/apps/esb/opt/splunk/6.5.2/search-head/lib/python2.7/site-packages/splunk/input.py", line 180, in submit
04-06-2017 13:50:30.199 -0600 ERROR sendmodalert - action=alert_manager STDERR - raise splunk.RESTException, (serverResponse.status, msg_text)
04-06-2017 13:50:30.199 -0600 ERROR sendmodalert - action=alert_manager STDERR - splunk.RESTException: [HTTP 403] ['message type=WARN code=None text=insufficient permission to access this resource;']
04-06-2017 13:50:30.214 -0600 INFO sendmodalert - action=alert_manager - Alert action script completed in duration=387 ms with exit code=1
04-06-2017 13:50:30.214 -0600 WARN sendmodalert - action=alert_manager - Alert action script returned error code=1
04-06-2017 13:50:30.214 -0600 ERROR sendmodalert - Error in 'sendalert' command: Alert script returned error code 1.
04-06-2017 13:50:30.214 -0600 ERROR SearchScheduler - Error in 'sendalert' command: Alert script returned error code 1., search='sendalert alert_manager results_file="/apps/esb/opt/splunk/6.5.2/search-head/var/run/splunk/dispatch/scheduler__admin_ZHZhLW9wcy1zdXBwb3J0__RMD5ada861e3c4e7d72f_at_1491508200_114/results.csv.gz" results_link="http://company:3000/app/dva-ops-support/@go?sid=scheduler__admin_ZHZhLW9wcy1zdXBwb3J0__RMD5ada861e3c4e7d72f_at_1491508200_114"'
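For context, the input.submit() call at the bottom of the traceback ultimately POSTs the raw event to splunkd's simple receiver REST endpoint, passing index, sourcetype, and source as query parameters. A rough sketch of the request it builds (the host, port, and hostname value are placeholders, not our actual environment):

```python
# Sketch of the URL splunk.input.submit() targets; index/sourcetype/source
# values are taken from the traceback above, everything else is a placeholder.
import urllib.parse

SPLUNKD = "https://localhost:8089"  # placeholder management URI

def build_submit_url(index, sourcetype, source, host):
    """Build the receivers/simple URL used to index a single raw event."""
    params = urllib.parse.urlencode({
        "index": index,          # must be defined on this instance, or HTTP 400
        "sourcetype": sourcetype,
        "source": source,
        "host": host,
    })
    return f"{SPLUNKD}/services/receivers/simple?{params}"

url = build_submit_url("alerts", "incident_change", "alert_handler.py", "sh01")
print(url)
```

A POST to this URL with the event text as the body, authenticated as a user whose role lacks the edit_tcp capability, comes back as HTTP 403 — the error in the traceback above.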
Well, it's working now. There were two issues.
After further searching I found this article:
minimum-permissions-required-for-using-http-simple-receiver.html
The edit_tcp capability was disabled for the user running the alert.
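The capability can be enabled in the role settings UI, or sketched in authorize.conf like this (the role name here is a placeholder, not our actual role):

```
[role_alerting]
edit_tcp = enabled
```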
After enabling it, I hit a new error message:
04-06-2017 15:24:22.344 -0600 ERROR sendmodalert - action=alert_manager STDERR - splunk.RESTException: [HTTP 400] ["message type=WARN code=None text=supplied index 'alerts' missing;"]
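For anyone hitting the same pair of errors: the HTTP 403 comes from the submitting user's role lacking the edit_tcp capability, and the HTTP 400 "supplied index ... missing" means the index is not defined on the instance actually receiving the event (here, the search head), even when the data is searchable because it lives on the indexers. A tiny, hypothetical helper (not part of Alert Manager) summarizing that mapping:

```python
# Hypothetical helper mapping the HTTP statuses seen in this thread to
# their likely causes when submitting events via receivers/simple.
def diagnose_submit_error(status):
    causes = {
        403: "submitting user's role lacks the edit_tcp capability",
        400: "target index is not defined on the instance receiving the event",
    }
    return causes.get(status, "unexpected status; check splunkd.log")

print(diagnose_submit_error(403))
print(diagnose_submit_error(400))
```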