I'm attempting to set up a new daily data source which is sent to the indexer through the Splunk Forwarder. Unlike most of the other data sets I've indexed up to this point, this source has a header line on each file that is generated (and the files are rolled daily). I don't want to index the header over and over again; I just want the data. I've looked around and set up my props.conf and transforms.conf in a way that seems like it should work.
props.conf
[leads]
INDEXED_EXTRACTIONS=TSV
SHOULD_LINEMERGE=false
TIME_FORMAT=%m/%d/%Y
pulldown_type=1
NO_BINARY_CHECK=1
FIELD_NAMES=Date,Count,Type,ClientID,ProductID,SponsorID,SponsorshipID
TRANSFORMS-tonull = strip_header
transforms.conf
[strip_header]
REGEX = ^D
DEST_KEY = queue
FORMAT = nullQueue
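As a side note, REGEX = ^D matches any event that begins with a capital D, not just the header, so a future data row that happened to start with D would get dropped too. A tighter anchor might be safer; this is just a sketch, assuming the header line always begins with the literal field name Date followed by a tab:
[strip_header]
REGEX = ^Date\t
DEST_KEY = queue
FORMAT = nullQueue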
Sample Data Set
Date Count Type ClientID ProductID SponsorID SponsorshipID
03/07/2014 1 PPV webinars 1065768 157 448
When I index the data set manually through a one-time Data Inputs import, I'm able to run searches for Count, ClientID, etc. The fields are extracted.
When the same data set comes in through the Forwarder, no fields are extracted, so I can't run searches on those fields. BUT! The header is suppressed from being indexed.
What the devil am I missing?
EDIT: It may be worth noting that I'm doing this in the props.conf and transforms.conf on the indexer, NOT on the source server.
I'm not sure this is necessarily the correct way to do it, but placing the props.conf and transforms.conf changes on the server running the forwarder (in etc/system/local) worked like a charm.
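For anyone hitting the same thing: my understanding is that with INDEXED_EXTRACTIONS, the forwarder itself does the structured-data parsing and ships the data already cooked, so indexer-side props/transforms for that sourcetype never get a chance to run. That would explain why field extraction only started working once the configs moved to the forwarder. If that's right, the structured-data parser can also consume the header line for you, which makes the nullQueue transform unnecessary. A sketch of a forwarder-side props.conf under that assumption (HEADER_FIELD_LINE_NUMBER tells the parser to read field names from line 1 instead of hardcoding them in FIELD_NAMES, and TIMESTAMP_FIELDS points it at the Date column):
[leads]
INDEXED_EXTRACTIONS=TSV
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=1
FIELD_DELIMITER=tab
HEADER_FIELD_LINE_NUMBER=1
TIMESTAMP_FIELDS=Date
TIME_FORMAT=%m/%d/%Y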