Why is Splunk not parsing a CSV file correctly with TAB as a delimiter and \n as a line separator?
Hi, I'm trying to parse a CSV file with TAB as the field separator and \n
as the line separator. There is no timestamp in the CSV; I want to use the file as a dictionary (a lookup table).
The problem is that I can't get Splunk to parse the file.
A file sample is:
1 12.01 45.35
2 10.01 45.35
I used these settings:
FIELD_DELIMITER=tab
FIELD_QUOTE=" (I don't have any quotes; there are only numbers)
FIELD_NAMES=id,lon,lat
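(Assembled into a props.conf stanza, with a placeholder sourcetype name of my own, that looks like this:)
[my_tsv_sourcetype]
FIELD_DELIMITER=tab
FIELD_QUOTE="
FIELD_NAMES=id,lon,lat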
And Splunk puts the whole row into field id:
{"1\t12.01\t45.35\t":"2\t10.01\t45.35\t"}
1. Why does it ignore TAB as a separator?
2. Why does Splunk add the first line to each "event"? I have 1000 lines; Splunk sees 1000 events, and each event has a single field "id" where the first line "1 12.01 45.35 " is always at the beginning of the event.
I have no idea what Splunk is trying to do...


- Are you sure the separator is actually a tab and not several spaces in a row? Check with TextMate or a similar editor to make sure (see the sketch after this list).
- Since your first field is id, Splunk takes what it considers to be the first field and gives it the label "id".
- Remove FIELD_QUOTE; if you don't have any quotes, Splunk might treat the entire record as quoted, which could confuse it further.
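A minimal sketch of that tab check, assuming a placeholder file path (point it at the real file):
import java.nio.file.Files;
import java.nio.file.Paths;

public class DelimiterCheck {
    public static void main(String[] args) throws Exception {
        // "data.tsv" is a placeholder path, not from this thread.
        for (String line : Files.readAllLines(Paths.get("data.tsv"))) {
            // Make tabs visible so runs of spaces can't masquerade as tabs.
            System.out.println(line.replace("\t", "[TAB]"));
        }
    }
}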
- Yes, I'm sure; I generate that file myself.
- I didn't understand that one. I tried renaming it to XXX, no luck.
- Did it, no luck.

Things are a bit different (mostly better) in 6.0 than in earlier releases; you can just do this in props.conf:
[SourceTypeForTSVwithNoHeader]
INDEXED_EXTRACTIONS = TSV
FIELD_NAMES=id,lon,lat
If your TSV has a header, then you don't even need the FIELD_NAMES line!
This has to be deployed to all of your Forwarders and the Splunk instances there have to be restarted before it will work.
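The input also has to reference the sourcetype; a minimal inputs.conf sketch (the monitored path is a placeholder) might look like:
[monitor:///path/to/data.tsv]
sourcetype = SourceTypeForTSVwithNoHeader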
It doesn't work. It can't even parse a ','-separated file.
Really, really weird behaviour.
Here is the code that writes the file:
// Requires java.util.Arrays and java.util.stream.Collectors.
fileWriter.append(
    Arrays.asList(entry.getProperties().getCellId(),
            c1.get(0), c1.get(1),
            c2.get(0), c2.get(1),
            c3.get(0), c3.get(1),
            c4.get(0), c4.get(1))
        .stream()
        .map(Object::toString)
        .collect(Collectors.joining(","))  // comma-separated fields
        + "\r\n");                         // CRLF record separator
I changed '\t' to ','; it doesn't help.
Changing \n to \r\n doesn't help either.
Now Splunk does its best to create a single long row from my 10K-line file. I have no idea why it tries to do that.
Before that, Splunk put all the fields into the first one.

Leave it as a comma and use this:
[SourceTypeForTSVwithNoHeader]
INDEXED_EXTRACTIONS = CSV
FIELD_NAMES=id,lon,lat
This has to be deployed to all of your Forwarders and the Splunk instances there have to be restarted before it will work.
It doesn't work.
So the working solution is:
fileWriter.append(
    Arrays.asList(entry.getProperties().getCellId(),
            c1.get(0), c1.get(1),
            c2.get(0), c2.get(1),
            c3.get(0), c3.get(1),
            c4.get(0), c4.get(1))
        .stream()
        .map(Object::toString)
        .collect(Collectors.joining(","))
        + "\r\n");
And one extra step: add a header to the file, along the lines of the sketch below.
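A minimal sketch of that step, assuming the same fileWriter; the nine column names are hypothetical placeholders matching the cell id plus the four (lon, lat) pairs written per row, not names from the original code:
// Header row, written once before any data rows.
fileWriter.append("id,lon1,lat1,lon2,lat2,lon3,lat3,lon4,lat4\r\n");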
Then Splunk does what's expected:
1. It reads the file line by line.
2. It doesn't put the first line into each event as it did before.
3. It correctly splits the fields and no longer ignores \t. (Back when Splunk put the whole line into the first field, the UI even showed the '\t' characters between the values.)
Weird!
