Getting Data In

Error : Bug during applyPendingMetadata, header processor does not own the indexed extractions confs

Path Finder

While indexing CSV files, Splunk is skipping some of them and logging the error below in splunkd.log.

Error : 01-15-2017 21:40:22.148 -0800 ERROR TailReader - Ignoring path="/opt/script_output_data/folder1/folder2/file_name_01152017_21_40_18.csv" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.

All of these files are 117KB in size, and I am creating the CSV files on Linux using this command:

 ssh admin@machine1 "some command" > /opt/script_output_data/folder1/folder2/file_name_`date +\%m\%d\%Y_\%H_\%M_\%S`.csv
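As a side note (my own sketch, not from the thread): the backslash-escaped % signs (`\%m` etc.) are what you need when the command is placed in a crontab, where % is a special character; in an interactive shell, plain % signs work, as in this minimal reproduction of the filename:

```shell
# Sketch: reproduce the timestamped filename from the original command.
# The \% escaping in the question is only required inside crontab entries;
# interactively, plain % works.
fname="file_name_$(date +%m%d%Y_%H_%M_%S).csv"
echo "$fname"   # e.g. file_name_01152017_21_40_18.csv
```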

Why am I getting this error, and how can I get rid of it?

my inputs.conf (on forwarder)

[monitor:///opt/script_output_data/folder1]
disabled = false
host_segment = 4
index = index1
sourcetype = custom_sourcetype_csv
initCrcLength = 2048
_TCP_ROUTING = indexer_machine

my props.conf (on both indexer & forwarder)

[custom_sourcetype_csv]
DATETIME_CONFIG = CURRENT
EXTRACT-Timestamp_extraction_cdot = \/opt\/script_output_data\/folder1\/[\w-\d.]+\/folder2[\w\d_]+_(?<mon>\d{2})(?<date>\d{2})\d{2}(?<year>\d{2})_(?<hr>\d{2})_(?<min>\d{2})_(?<sec>\d{2})\.csv in source
HEADER_FIELD_LINE_NUMBER = 3
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = Some description
disabled = false
pulldown_type = true

SplunkTrust

Hi, I assume you are using a universal forwarder. Is that correct?
If not, I'll convert my answer to a comment.

If so, you have to remove the extraction from props.conf on your forwarder: universal forwarders cannot extract fields (they can only filter events). To extract fields at the forwarder, you would need a heavy forwarder.
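To make that concrete, here is one way to split the stanza from the question across the two tiers. This is my own illustration of the advice above, not a configuration posted in the thread: the structured-data (index-time) settings stay on the universal forwarder, while the search-time EXTRACT lives only on the indexer/search head.

```
# props.conf on the universal forwarder (index-time settings only)
[custom_sourcetype_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 3
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true

# props.conf on the indexer / search head (search-time settings only)
[custom_sourcetype_csv]
KV_MODE = none
# the EXTRACT-Timestamp_extraction_cdot regex from the question goes here
```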

By the way, I don't see why you are specifying _TCP_ROUTING here when you only send to one indexer or indexer cluster. Usually you define your target group in outputs.conf unless you need to separate your data streams.



Path Finder

Yes, it's a universal forwarder. But I have defined props.conf on the indexer as well.


SplunkTrust

Ah, that's alright then. The EXTRACT is correct on the indexer, but not in the universal forwarder's props.conf. Did you try removing it there?


Path Finder

Yeah, I just removed that stanza and restarted Splunk. Will it index the existing files, or just the new ones?


SplunkTrust

That depends on whether the files have already been indexed. If not, they should be indexed now. If they have, they will only be indexed again when they change.
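As a rough illustration (my own sketch, not from the thread): Splunk identifies a monitored file by a checksum of its initial bytes, the length of which is set by initCrcLength (2048 in the inputs.conf above). Appending data past that window leaves the fingerprint unchanged, so an already-indexed file that grows is read only from its saved offset rather than re-indexed from the top:

```shell
# Sketch: appending to a file does not change a checksum computed over its
# first 2048 bytes, which is roughly how Splunk's file tracking
# (initCrcLength) recognizes a previously seen file.
f=$(mktemp)
head -c 2048 /dev/urandom > "$f"                  # 2048-byte fingerprint window
crc1=$(head -c 2048 "$f" | cksum | cut -d' ' -f1)
echo "new event line" >> "$f"                     # growth beyond the window
crc2=$(head -c 2048 "$f" | cksum | cut -d' ' -f1)
[ "$crc1" = "$crc2" ] && echo "fingerprint unchanged"
rm -f "$f"
```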


Path Finder

Okay, so the 2-3 day old files that didn't get indexed because of this error should now be indexed automatically, right?


SplunkTrust

Right. And did it solve your problem?


Path Finder

Yeah, thanks! Problem solved.


Path Finder

abhinav_maxonic
Can you explain how you were able to resolve this error?

Thank you.
Sunita


Path Finder

I used _TCP_ROUTING to filter and route the data, since other data is also being forwarded from this forwarder to a different indexer.
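For anyone reading later, a hypothetical sketch of that kind of setup (the group names and hostnames here are my own invention, not from the thread): outputs.conf defines one tcpout group per destination, and each monitor stanza selects its group with _TCP_ROUTING.

```
# outputs.conf on the forwarder (group names are hypothetical)
[tcpout:indexer_machine]
server = indexer1.example.com:9997

[tcpout:other_indexers]
server = indexer2.example.com:9997

# inputs.conf: each input picks its destination group
[monitor:///opt/script_output_data/folder1]
_TCP_ROUTING = indexer_machine

[monitor:///var/log/other_app]
_TCP_ROUTING = other_indexers
```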
