Getting Data In

Transforms not being applied to _json sourcetype

jocobknight
Explorer

I've got a dedicated Heavy Forwarder that I am trying to use to ship logs out via syslog:

outputs.conf

[syslog:outgoing]
server = receiver.vm.com:5140
type = tcp
priority = <110>
maxEventSize = 25600
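
Not shown above: events only reach a [syslog:outgoing] group when a transform routes them there by writing the group name to _SYSLOG_ROUTING, referenced from a TRANSFORMS- entry in props.conf like the ones below. A minimal sketch of that piece (the send_to_syslog stanza name is purely illustrative):

transforms.conf

[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog:outgoing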

But I also want to include Splunk metadata fields in the event as it gets shipped:

props.conf

[host::*]
TRANSFORMS-Syslog_Items = \
    Syslog_Epoch, Syslog_SourceType, Syslog_Source, Syslog_Host, Syslog_Index
priority = 1

transforms.conf

# At index time, $0 in FORMAT is whatever was in DEST_KEY (_raw here)
# before the transform ran, so each of these prepends its field to
# the raw event.
[Syslog_Index]
SOURCE_KEY = _MetaData:Index
REGEX = ^(.*)$
FORMAT = toindex=$1 $0
DEST_KEY = _raw

[Syslog_Host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = sourcehost=$1 $0
DEST_KEY = _raw

[Syslog_SourceType]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(.*)$
FORMAT = sourcetype=$1 $0
DEST_KEY = _raw

[Syslog_Source]
SOURCE_KEY = MetaData:Source
REGEX = ^source::(.*)$
FORMAT = source=$1 $0
DEST_KEY = _raw

[Syslog_Epoch]
SOURCE_KEY = _time
REGEX = ^(.*)$
FORMAT = epoch=$1 $0
DEST_KEY = _raw
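
Note on ordering: transforms run in the order listed in TRANSFORMS-Syslog_Items, and since each one prepends to _raw, the last transform listed ends up leftmost. Walking a hypothetical df event through the chain:

_raw:                    Filesystem...
after Syslog_Epoch:      epoch=1621371418 Filesystem...
after Syslog_SourceType: sourcetype=df epoch=1621371418 Filesystem...
after Syslog_Source:     source=df sourcetype=df epoch=1621371418 Filesystem...
after Syslog_Host:       sourcehost=generichostname source=df sourcetype=df epoch=1621371418 Filesystem...
after Syslog_Index:      toindex=os sourcehost=generichostname source=df sourcetype=df epoch=1621371418 Filesystem...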

All this works for most data:

Example Data that worked:

<110> generichostname toindex=os sourcehost=generichostname source=df sourcetype=df epoch=1621371418 Filesystem...

But I've come to realize that structured data (specifically _json sourcetype data) does not work:

Example Data that failed:

<110> generichostname {"hostname": "generichostname", "ipaddress": "10.x.x.x"}

I have been trying different modifications to the _json sourcetype configuration, and I even went so far as to erase the _json sourcetype altogether, but nothing works. If the data is json, then the transforms simply do not get applied. How do I fix this?

Again, this is a dedicated Heavy Forwarder whose sole duty is to ship out syslog. Universal Forwarders will optionally be given this HF as an output destination for any logs we want shipped out via syslog. So I don't care how badly or how weirdly I change the parsing configs on this Splunk instance. I just want to indiscriminately insert metadata in front of ALL logs that this HF receives and ships out.

Any insight would be very appreciated! Thanks in advance!

1 Solution

jocobwknight
Explorer

So, as it turns out (after a LOT of testing, see the red lines...), a sourcetype with "INDEXED_EXTRACTIONS" specified at any point in the pipeline will disqualify that event from transforms on _raw for the rest of that event's pre-index lifespan:

indexed_extractions.png

I'm talking with support now to submit a feature request for adding something like an "ENABLE_TRANSFORMS" option to props.conf that gets read during the typingQueue.

Until that gets added, the only option I have right now is to modify the default config and set up a system to make sure those modifications don't get overwritten.
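
A minimal sketch of what that modification looks like, assuming the stock [_json] stanza (exact contents vary by Splunk version), made on the instance that first reads the data, i.e. the UF in my case:

$SPLUNK_HOME/etc/system/default/props.conf

[_json]
# INDEXED_EXTRACTIONS = json
# ^ commented out so that transforms on _raw can still apply
# downstream. Edits to default config get overwritten on upgrade,
# hence the need for a system that re-applies them.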

And unfortunately this means that the forwarders I do this to will only be able to send to the syslog shipper, and from there I'll have to fork the data back to the main indexer if I want it in Splunk with proper extraction done.

The other option is to force local processing on the UF, but that's not ideal because I need to minimize the performance drain on the agents.
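
For reference, that would be the force_local_processing setting in props.conf on the UF; a sketch, assuming it's only needed for the JSON data:

props.conf

[_json]
force_local_processing = true
# Forces the UF to run this sourcetype through its full local
# parsing pipeline instead of deferring parsing to the HF.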


jocobknight
Explorer

In case this looks weird I happen to be merging my accounts right now. I swear I'm not talking to myself! 😐
