Hi all,
I'm trying to use INDEXED_EXTRACTIONS = CSV
but for some reason it's just not working. My input looks as follows
***SPLUNK*** sourcetype=csv source=index/host/query.sql
"SESSION_ID","LOGON_TIME","SCHEMA_NAME","TOTAL_SESSION_MEMORY"
"119","2014-08-22 11:04:03","SYS","813704"
and my props.conf
[csv]
DATETIME_CONFIG=NONE
INDEXED_EXTRACTIONS=CSV
TRANSFORMS-index=index-as-first-folder
None of the four fields is extracted, but both the TRANSFORMS and the DATETIME_CONFIG settings take effect. Can anybody spot a mistake?
It's got nothing to do with the CSV handling; the transform just sets the index to the name of the first folder in my source 🙂
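For context, a transform that sets the index from the first folder of the source path might look roughly like this. This is a sketch, not the actual stanza from the thread (it wasn't posted), and the regex on the source key is an assumption:

```ini
# transforms.conf -- hypothetical reconstruction of "index-as-first-folder"
[index-as-first-folder]
SOURCE_KEY = MetaData:Source
# Capture the first path component, e.g. "index" from "index/host/query.sql"
REGEX = ^(?:source::)?([^/]+)/
DEST_KEY = _MetaData:Index
FORMAT = $1
```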
By default, the triple-splat ***SPLUNK***
magic cookie is not enabled for log files, and it is probably breaking the default CSV handling. If I recall correctly, it is enabled for the first line of scripted inputs by default. I'm not aware whether INDEXED_EXTRACTIONS can work for scripted inputs at all: the design requires a certain amount of seeking around in the file, which makes fully generic stream processing hard.
Ok. I tried PSV and TSV as well. No success. Opened case 187571.
Hi,
the output is actually from a script which changes the sourcetype repeatedly over the course of its run. The directive is respected, since the DATETIME_CONFIG
is applied (earlier, the event was being sent to August, so I'm pretty sure).
Lastly, I have another scripted input which uses TSV, but in that case I don't use ***SPLUNK***
. Either INDEXED_EXTRACTIONS
is really determined based on the first line, or CSV
is broken. I may just change the delimiters to pipes or tabs and check it out later.
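For the delimiter experiment, the structured-input settings would look something like this (a sketch; the stanza name is assumed, and the commented alternative uses the FIELD_DELIMITER option):

```ini
# props.conf -- pipe-delimited variant (sketch)
[psv]
INDEXED_EXTRACTIONS = psv
DATETIME_CONFIG = NONE
# Or keep a single sourcetype and override the delimiter explicitly:
# FIELD_DELIMITER = |
```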
Best Regards.
Because the file effectively has no header (the first line is the ***SPLUNK*** directive, not the column names), you should use INDEXED_EXTRACTIONS = csv with the FIELD_NAMES option:
http://docs.splunk.com/Documentation/Splunk/6.2.2/Data/Extractfieldsfromfileheadersatindextime
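A minimal sketch of that approach, using the column names from the sample above:

```ini
# props.conf (sketch) -- supply the field names explicitly, since Splunk
# will not treat the second line of the file as a header row
[csv]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = SESSION_ID,LOGON_TIME,SCHEMA_NAME,TOTAL_SESSION_MEMORY
DATETIME_CONFIG = NONE
```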
Perhaps the magic cookie will be honored later down the line, or perhaps it will be treated as an event (in which case you can strip it with a transform to the null queue).
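If the cookie line does come through as an event, dropping it might look like this (a sketch; the stanza name and regex are assumptions):

```ini
# props.conf
[csv]
TRANSFORMS-nullcookie = drop-splunk-cookie

# transforms.conf
[drop-splunk-cookie]
REGEX = ^\*\*\*SPLUNK\*\*\*
DEST_KEY = queue
FORMAT = nullQueue
```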
Would you please share the relevant transform?