Alert action "log event" not writing to index and receiving error message


I have set the alert to write the event to the index using the 'log event' action.

I am writing to a custom index named 'notable'. I have made sure that the index has been created on the search head, and I have also added the 'edit_tcp' capability to the user role that owns the alert.

But when the alert tries to execute the action, we see the following error messages in Splunk's internal logs.
No events are written to the index, and I can't find any documentation on what error code 2 means in this scenario.

If anyone has any idea, please let me know.

11-21-2019 07:00:17.684 -0600 WARN  sendmodalert - action=logevent - Alert action script returned error code=2
11-21-2019 07:00:17.684 -0600 INFO  sendmodalert - action=logevent - Alert action script completed in duration=97 ms with exit code=2
11-21-2019 07:00:17.678 -0600 ERROR sendmodalert - action=logevent STDERR -  Error sending receiver request: HTTP Error 400: Bad Request

Path Finder

Hi, I encountered the same issue in my architecture (SH cluster and IDX cluster).
I resolved the problem by deploying the index configuration on both the search heads and the indexers.
In a distributed environment, the index must exist on both the search head and the indexer.
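For example, a minimal indexes.conf stanza that would need to be deployed to both tiers (the paths here are illustrative defaults; 'notable' is the index name from the original post):

```ini
# indexes.conf - must be present on BOTH the search head(s) and the indexers
[notable]
homePath   = $SPLUNK_DB/notable/db
coldPath   = $SPLUNK_DB/notable/colddb
thawedPath = $SPLUNK_DB/notable/thaweddb
```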


Path Finder

@swebb07g @KISHORE_LK - Did you find a solution?

I'm seeing the same issue, and I've tried the following:

  • Verified target index exists, created both in the SHC and on the Indexer Cluster
  • Verified account has edit_tcp capability
  • Tried sending to main instead of target index
  • Sent events to main by using curl to POST to the "https://localhost:8089/services/receivers/simple" endpoint
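In case it helps anyone reproduce this, the curl test looked roughly like the following (the credentials, index, and sourcetype here are placeholders, not values from this thread; -k accepts the default self-signed certificate on the management port):

```shell
# POST a raw test event to the simple receiver endpoint on the management port
SPLUNK_URL="https://localhost:8089/services/receivers/simple"
curl -sk -u admin:changeme \
  "${SPLUNK_URL}?index=main&sourcetype=curl_test" \
  -d "test event written via the simple receiver" \
  || echo "request failed: is splunkd listening on 8089?"
```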


I'm honestly out of ideas, and this is starting to feel like an XKCD DenverCoder9 situation.

Path Finder

I'm very familiar with that particular comic lol.

I'd recommend inspecting the code for the alert action script that sendmodalert invokes (I believe it's Python), if you have direct access to the Splunk server and are comfortable with it.

What I would do is try to enhance the error output to include more details, like the domain and/or URL that is causing the error.
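For what it's worth, here's a rough sketch of what I mean, assuming the script uses Python's urllib (the function names here are hypothetical, not from the actual logevent script): catch the HTTPError and write the URL and response body to stderr, since sendmodalert forwards the script's stderr into splunkd.log.

```python
import sys
import urllib.error
import urllib.request


def describe_http_error(err: urllib.error.HTTPError) -> str:
    """Format an HTTPError with the URL and response body, not just the code."""
    body = err.read().decode("utf-8", errors="replace")
    return f"HTTP Error {err.code}: {err.reason} url={err.url} body={body}"


def send_event(url: str, data: bytes) -> None:
    """Hypothetical send wrapper: on failure, log full details and exit 2."""
    try:
        urllib.request.urlopen(urllib.request.Request(url, data=data))
    except urllib.error.HTTPError as err:
        # stderr from a modular alert action ends up in splunkd.log
        print("Error sending receiver request: " + describe_http_error(err),
              file=sys.stderr)
        sys.exit(2)
```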

Never got those particular errors resolved. I thought they were related to an issue I was having where the Splunk server stopped attempting to send any alerts, but that appeared to be caused by something else.

Path Finder

Finally got it working. Apparently I wasn't as thorough as I thought.

The core issue was that the index actually didn't exist on the SHC; when I originally checked for that (and when I tried sending to main), I was apparently just wrong.

I did try to add more detail to the error output, so it would include the full response and not just the error code, but I never got that working properly. The script would run, but it never actually wrote my added stderr message to splunkd.log the way it did the error code.
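For anyone else chasing this: one quick way to confirm the index really exists on the search head tier is a REST search run on the SH itself ('notable' is the index from the original post):

```
| rest /services/data/indexes splunk_server=local
| search title=notable
| table title splunk_server
```

If that returns no rows on the search head, the logevent action will fail the same way it did here.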


Path Finder

Can you provide an update on how you resolved this issue?
