Splunk is not indexing some of my CSV files and is logging the error below in splunkd.log:
Error : 01-15-2017 21:40:22.148 -0800 ERROR TailReader - Ignoring path="/opt/script_output_data/folder1/folder2/file_name_01152017_21_40_18.csv" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.
All of these files are 117 KB in size, and I am creating the CSVs on Linux with this command:
ssh admin@machine1 "some command" > /opt/script_output_data/folder1/folder2/file_name_`date +\%m\%d\%Y_\%H_\%M_\%S`.csv
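For reference, the backslashes before each `%` are needed when this runs from cron, since cron treats a bare `%` specially; in an interactive shell a plain `%` works as well. A minimal sketch of the filename this `date` format produces:

```shell
# sketch: reproduce the filename pattern from the command above
fname="file_name_$(date +%m%d%Y_%H_%M_%S).csv"
echo "$fname"   # e.g. file_name_01152017_21_40_18.csv
```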
Why am I getting this error, and how can I get rid of it?
My inputs.conf (on the forwarder):
[monitor:///opt/script_output_data/folder1]
disabled = false
host_segment = 4
index = index1
sourcetype = custom_sourcetype_csv
initCrcLength = 2048
_TCP_ROUTING = indexer_machine
My props.conf (on both the indexer and the forwarder):
[custom_sourcetype_csv]
DATETIME_CONFIG = CURRENT
EXTRACT-Timestamp_extraction_cdot = \/opt\/script_output_data\/folder1\/[\w-\d.]+\/folder2[\w\d_]+_(?<mon>\d{2})(?<date>\d{2})\d{2}(?<year>\d{2})_(?<hr>\d{2})_(?<min>\d{2})_(?<sec>\d{2})\.csv in source
HEADER_FIELD_LINE_NUMBER = 3
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = Some description
disabled = false
pulldown_type = true
Hi, I assume you are using a universal forwarder. Is that correct?
If not, I'll convert my answer to a comment.
If so, you have to remove the extraction from the props.conf on your forwarder. Universal forwarders cannot extract fields (they can only filter events); to extract fields on a forwarder, you would need a heavy forwarder.
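On the forwarder, that would leave a stanza like the following (a sketch based on your props.conf above, with the search-time `EXTRACT-` line removed; the extraction would stay only in props.conf on the indexer/search head):

```ini
# props.conf on the universal forwarder
# (sketch: same stanza as above, minus the search-time EXTRACT- setting)
[custom_sourcetype_csv]
DATETIME_CONFIG = CURRENT
HEADER_FIELD_LINE_NUMBER = 3
INDEXED_EXTRACTIONS = csv
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
disabled = false
```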
Btw, I don't see why you are specifying _TCP_ROUTING here when you only have one indexer (or indexer cluster); usually you define your output group in outputs.conf unless you want to separate your data streams.
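For example, the routing could instead be defined once in outputs.conf on the forwarder (the hostname and port here are placeholders), and the `_TCP_ROUTING` line dropped from inputs.conf:

```ini
# outputs.conf on the forwarder (server host:port is hypothetical)
[tcpout]
defaultGroup = indexer_machine

[tcpout:indexer_machine]
server = indexer.example.com:9997
```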