Alerting

Alert action "log event" not writing to index and receiving error message

KISHORE_LK
Explorer

I have set the alert to write the event to the index using the 'log event' action.

I am writing to a custom index named 'notable'. I have made sure the index exists on the search head and have also added the 'edit_tcp' capability to the role of the user who owns the alert.

But when the alert tries to execute the action, we see the following error messages in Splunk's internal logs. No events are written to the index, and I can't find any documentation on what error code 2 means in this scenario.

If anyone has any idea, please let me know.

11-21-2019 07:00:17.684 -0600 WARN  sendmodalert - action=logevent - Alert action script returned error code=2
11-21-2019 07:00:17.684 -0600 INFO  sendmodalert - action=logevent - Alert action script completed in duration=97 ms with exit code=2
11-21-2019 07:00:17.678 -0600 ERROR sendmodalert - action=logevent STDERR -  Error sending receiver request: HTTP Error 400: Bad Request

Sivrat
Path Finder

@swebb07g @KISHORE_LK - Did you find a solution?

I'm seeing the same issue, and I've tried the following:

  • Verified target index exists, created both in the SHC and on the Indexer Cluster
  • Verified account has edit_tcp capability
  • Tried sending to main instead of target index
  • Successfully sent events to main by using curl to POST to the "https://localhost:8089/services/receivers/simple" endpoint
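For anyone wanting to reproduce that last check without curl, here is a minimal standard-library sketch that builds the same kind of POST against the receivers/simple endpoint. The host, port, credentials, index name, and the sourcetype query parameter are placeholders/assumptions, not values from this thread; a 400 from this endpoint generally means the request itself was malformed or referred to something that doesn't exist, such as an unknown index.

```python
import base64
import urllib.request

def build_event_request(host, port, user, password, index, event):
    """Build (but do not send) a POST to Splunk's receivers/simple endpoint.

    All connection details here are placeholders -- substitute your own.
    """
    url = (
        f"https://{host}:{port}/services/receivers/simple"
        f"?index={index}&sourcetype=manual"
    )
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=event.encode(),
        headers={"Authorization": f"Basic {token}"},
        method="POST",
    )

req = build_event_request("localhost", 8089, "admin", "changeme",
                          "notable", "test event")
# Actually sending would be: urllib.request.urlopen(req, context=ctx),
# where ctx is an ssl context that tolerates Splunk's self-signed cert.
```

If the POST succeeds from the command line with the same account but the alert action still gets a 400, that suggests the alert action's request differs in some detail, e.g. the index it names.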


I'm honestly out of ideas, and this is starting to feel like an XKCD Denvercoder situation.

swebb07g
Path Finder

I'm very familiar with that particular comic lol.

I'd recommend inspecting the code for the script behind the logevent action (I believe it's Python) if you have direct access to the Splunk server and are comfortable with it.

I would try enhancing the error output to include more detail, such as the host and/or URL that is causing the error.

I never got those particular errors resolved. I thought they were related to an issue I was having where the Splunk server stopped attempting to send any alerts, but that appeared to be caused by something else.

Sivrat
Path Finder

Finally got it working. Apparently I wasn't as thorough as I thought.

The core issue was that the index actually didn't exist on the SHC; my original checks, both for the index and when sending to main, were simply wrong.

I did try adding more detail to logevent.py, to log the full response and not just the error code, but never got that working properly. The script would run, but my added stderr message never showed up in splunkd.log the way the error code did.
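For reference, the usual reason an added message never reaches splunkd.log is that sendmodalert picks up a custom alert action's stderr line by line, keyed on a leading severity word, and output still sitting in the buffer when the script exits on an error path can be lost. A minimal sketch of a helper along those lines (the function name, and the sample response body, are my own, not from logevent.py):

```python
import sys

def log(level, message):
    """Write a line to stderr in the form sendmodalert expects.

    sendmodalert forwards stderr lines to splunkd.log; prefixing each line
    with a level such as ERROR, WARN, or INFO controls how it is logged.
    Flushing matters: output still buffered at exit can be silently lost.
    """
    line = f"{level} {message}"
    print(line, file=sys.stderr, flush=True)
    return line  # returned only to make the helper easy to test

# Example: surface the full HTTP response body, not just the status code.
# 'resp_body' is a placeholder for whatever the error handler gives you;
# with urllib, an HTTPError's body is available via e.read().
resp_body = '{"messages":[{"type":"ERROR","text":"Index not found"}]}'
log("ERROR", f"receiver request failed, response body: {resp_body}")
```

For a 400 from the receiver endpoint, that response body is typically where the human-readable reason (e.g. a missing index) shows up.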


swebb07g
Path Finder

Can you provide an update on how you resolved this issue?
