Getting Data In

Why are the transforms on indexer props being broken by the extractions on my forwarder's props?

thisissplunk
Builder

Whenever I enable this EXTRACTION stanza on my universal forwarder, my TRANSFORM extraction stops working on my indexer:

[web_app_logs]
NO_BINARY_CHECK = 1
INDEXED_EXTRACTIONS = TSV
PREAMBLE_REGEX = ^#.*
FIELD_DELIMITER = \t

The indexer's props.conf stanza with the TRANSFORMS line that stops working (I added the input-time settings as redundancy during testing):

[web_app_logs]
TRANSFORMS-AutoSourceType = AutoSourceType
NO_BINARY_CHECK = 1
INDEXED_EXTRACTIONS = TSV
PREAMBLE_REGEX = ^#.*
FIELD_DELIMITER = \t
SHOULD_LINEMERGE = False
MAX_TIMESTAMP_LOOKAHEAD = 50
TZ = UTC
TIME_FORMAT = %s.%6Q
TRUNCATE = 250000

The forwarder's props.conf extraction stanza should be fine according to this, and it does indeed parse my TSV files correctly. The specific settings for field extractions can be found here. For context, the TRANSFORMS stanza sets events to new sourcetypes depending on a string found within them (a rough sketch of that kind of transform is below).
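To illustrate the kind of transform I mean, a sourcetype-overriding stanza like AutoSourceType would look something like the following in transforms.conf; the regex and the target sourcetype here are just placeholders, not my real values:

# transforms.conf (illustrative placeholder values only)
[AutoSourceType]
# By default SOURCE_KEY is _raw, so the regex is matched against the event text.
REGEX = some_identifying_string
# Rewrite the sourcetype for matching events at index time.
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::web_app_logs_variant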

What am I missing? Why does my forwarder's props.conf interfere with the parts of my indexer's props.conf that come after input time? Does one override the other? I tried putting my TRANSFORMS into the forwarder's props.conf, but that doesn't work either (as expected, since it's not a heavy forwarder).

1 Solution

thisissplunk
Builder

What a Splunk expert told me is happening: the indexed extractions in the forwarder's props.conf process the data in a pseudo-indexing sort of way, so the transform doesn't work as expected on the indexer side. Specifically, it has to do with how the metadata fields are handled, including the one I am trying to use in my transform (source).

We think the reasoning is this: http://docs.splunk.com/Documentation/Splunk/6.0/Data/Extractfieldsfromfileheadersatindextime#Caveats


gjanders
SplunkTrust

There are a few parameters that can cause the universal forwarder to enable queues beyond just the parsing queue; I believe indexed extractions are one of the situations that can cause this.

For example, in a previous answer I eventually found that FIELD_HEADER_REGEX enables more queues on the universal forwarder; after using that parameter, line breaking and time parsing started to occur on the universal forwarder. A rough sketch of that kind of stanza is below.
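For illustration only (the sourcetype name and header regex here are made up, not taken from the question), a structured-data stanza on the universal forwarder might look like this:

# props.conf on the universal forwarder (hypothetical example)
[my_structured_sourcetype]
INDEXED_EXTRACTIONS = TSV
# FIELD_HEADER_REGEX identifies the header line that carries the column names;
# in my experience, adding structured-data settings like this is what caused
# line breaking and time parsing to happen on the universal forwarder itself.
FIELD_HEADER_REGEX = ^#Fields:\s*(.*)
FIELD_DELIMITER = \t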
