I have set the alert to write the event to the index using the 'log event' action.
I am writing to a custom index named 'notable'. I have made sure that the index has been created on the search head, and I have also added the 'edit_tcp' capability to the role of the user who owns the alert.
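For anyone checking the same thing, the capability grant in authorize.conf looks roughly like this (the role name here is just a placeholder, not my actual role):

```ini
# authorize.conf -- role name is a placeholder; the capability is the one
# mentioned above
[role_alert_owner]
edit_tcp = enabled
```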
But when the alert tries to execute the action, we see the following error messages in Splunk's internal logs.
No events are being written to the index, and I can't find any documentation on what error code 2 means in this scenario.
If anyone has any idea, please let me know.
11-21-2019 07:00:17.684 -0600 WARN sendmodalert - action=logevent - Alert action script returned error code=2
11-21-2019 07:00:17.684 -0600 INFO sendmodalert - action=logevent - Alert action script completed in duration=97 ms with exit code=2
11-21-2019 07:00:17.678 -0600 ERROR sendmodalert - action=logevent STDERR - Error sending receiver request: HTTP Error 400: Bad Request
Hi, I encountered the same issue in my architecture (SH cluster and IDX cluster).
I resolved the problem by deploying the index configuration on both the search heads and the indexers.
In a distributed environment, the index must exist on both the search head and the indexer.
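For reference, the piece that has to exist on both tiers is just the index definition in indexes.conf — something like the stanza below, using the 'notable' name from the question and the default path layout:

```ini
# indexes.conf -- deploy to the indexers, and make the same index known
# to the search head(s) as well
[notable]
homePath   = $SPLUNK_DB/notable/db
coldPath   = $SPLUNK_DB/notable/colddb
thawedPath = $SPLUNK_DB/notable/thawedb
```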
I'm honestly out of ideas, and this is starting to feel like an xkcd DenverCoder9 situation.
I'm very familiar with that particular comic lol.
I'd recommend inspecting the code for the script behind the sendmodalert action (I believe it's Python) - if you have direct access to the Splunk server and are comfortable with it.
What I would try is enhancing the error output to include more details - like the domain and/or URL that is causing the error.
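A rough sketch of the kind of change I mean - this is not Splunk's actual logevent.py, just an illustration of turning a bare HTTP error code into a message that includes the URL and response body (the helper name, endpoint URL, and JSON body are all made up for the example):

```python
import io
import sys
from urllib.error import HTTPError

def describe_http_error(err):
    """Turn an HTTPError into a detailed message: status, reason, URL, body."""
    try:
        body = err.read().decode("utf-8", errors="replace")
    except Exception:
        body = "<unreadable body>"
    return "HTTP Error %s: %s url=%s body=%s" % (err.code, err.reason, err.url, body)

# Simulate the kind of 400 seen in the log above (endpoint path is a guess)
fake = HTTPError(
    "https://localhost:8089/services/receivers/simple", 400, "Bad Request",
    hdrs=None, fp=io.BytesIO(b'{"messages":[{"text":"Index does not exist"}]}'),
)
msg = describe_http_error(fake)
print(msg, file=sys.stderr)
```

Something like that written to stderr would land in splunkd.log next to the existing ERROR line and tell you which request the 400 came from.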
Never got those particular errors resolved. I thought they were related to an issue I was having where the splunk server stopped attempting to send any alerts, but that appeared to be caused by something else.
Finally got it working. Apparently I wasn't as thorough as I expected before.
The core issue was that the index actually didn't exist on the SHC; apparently when I originally checked for that (and when I tested sending to main), I was just wrong.
I did try to add more detail in logevent.py, to include the full response and not just the error code, but I never got that working properly. The script would run, but it never actually wrote my added stderr message to splunkd.log the way it did the error code.