Alerting

Problem logging alerts to a new index with key fields from the original event: JSON parsing issue? Other ideas?

joconnor
Explorer

I have several alerts set up for a series of events. When an alert fires I want to log it to a new index. The problem is that the new index then isn't an index of the original events, but rather of the alerts. So I'm trying to use the "Log Event" action in the alert to add it to the index, and at the same time use the "Event" field of that action to include information about the offending event in JSON format, built from $result$ token variables. So my Event field is literally:

{"field1":"$result.value1$", "field2":"$result.value2$", ...}

This was "kind of" working, but some fields were causing issues with the JSON parsing if they included line breaks, etc. I started tinkering with the props.conf file, as well as the alert query string, to try to do some pre-processing of the data to prevent it from breaking JSON parsing, but now it is just completely broken and I don't understand why. I've reverted the alerts so they work as they always did. My props.conf file is stripped down to simply:

"[my_sourcetype]
INDEXED_EXTRACTIONS=json
TRUNCATE=0
SHOULD_LINEMERGE=true"

I have tried with and without KV_MODE. Sometimes it works, sometimes it doesn't. 

At this point I'm a little lost on what to try or explore next. Maybe there's a different approach to what I'm trying to do? I want Splunk to treat the data as fields so I can reference them directly from SOAR / Phantom. Any and all ideas welcome. Thank you!


PickleRick
SplunkTrust

I'd suspect the most probable culprits are line breaks and quotation marks.

https://docs.splunk.com/Documentation/Splunk/9.0.1/Alert/EmailNotificationTokens#Result_tokens

I don't see any mention of the fields being escaped in any way. So if you get, for example:

{ field1: "My result is: "whatever"!"}

after substitution of the tokens, it does not constitute a valid JSON document.

You could try to escape "tricky" characters in your alert output so that the log event receives "safe" strings.
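
For example (just a sketch; value1 and value2 are the placeholder field names from the original post), something along these lines at the end of the alert search could neutralize the characters that break the JSON:

| eval value1=replace(replace(value1, "[\r\n]+", " "), "\"", "\\\"")
| eval value2=replace(replace(value2, "[\r\n]+", " "), "\"", "\\\"")

The inner replace collapses line breaks to spaces and the outer one backslash-escapes embedded double quotes, so the substituted token stays inside its JSON string. You may need to adjust the escaping depending on what your values actually contain.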


johnhuang
Motivator

Instead of using the "Log Event" action, you could try using the collect command to write the results to a summary index. This lets you control which key/value fields get written.

| collect index="<summary_index_name>" source="source_name" sourcetype="stash"

joconnor
Explorer

I appreciate the suggestion. I will try it out and follow up shortly, but will including this in the alert search query cause it to log every time the alert's search runs? With a cron schedule running once a minute, I would prefer a triggered index containing only the events I'm looking for.


johnhuang
Motivator

As long as you run collect at the end of the search, the output should match the data available to your alert; if the search comes back with no data, then no events will be saved to the summary index. Obviously, if you have any suppressions or thresholds in your alert trigger, then this would not achieve what you want.
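
For example, a scheduled search along these lines (the index, fields, and threshold here are only illustrative placeholders) writes to the summary index only on runs where something passes the filter:

index=wineventlog EventCode=4625
| stats count AS failures BY user
| where failures > 5
| collect index="triggered_index" source="failed_logon_alert" sourcetype="stash"

On runs where the where clause removes everything, nothing reaches collect and nothing lands in the summary index.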


joconnor
Explorer

So I was able to get this all going and I'm still running into a familiar problem. If my alert triggers off a Windows event log, Splunk automatically parses all 40-odd fields and includes them as fields. In the summary index, the raw data ends up in the event body, but Splunk does not parse it into fields. I've tried utilizing the 'marker' option, but for whatever reason I'm not able to get the summary index events to parse into fields.

My default main index parses everything perfectly, yet I can't find any settings in props.conf or anywhere else that explain how it does so, so that I can replicate it for the summary index. Am I missing something obvious?
