Getting Data In

Why is Splunk not parsing a CSV file correctly with TAB as a delimiter and \n as a line separator?

seregaserega
Explorer

Hi, I'm trying to parse a CSV file with TAB as the field separator and \n as the line separator. There is no timestamp in the CSV; I want to use the file as a dictionary.
The problem is that I can't get Splunk to parse the file.
A sample of the file:

1   12.01 45.35 
2   10.01   45.35       

I used these settings:

FIELD_DELIMITER=tab
FIELD_QUOTE=" (I don't have any quotes, there are only numbers)
FIELD_NAMES=id,lon,lat
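For reference, a minimal sketch of the full props.conf stanza these settings would sit in (the sourcetype name is just an example, and, as the answers below point out, INDEXED_EXTRACTIONS is what switches structured parsing on):

[my_tsv_dictionary]
INDEXED_EXTRACTIONS = TSV
FIELD_DELIMITER = tab
FIELD_NAMES = id,lon,lat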

And Splunk puts the whole row into field id:

{"1\t12.01\t45.35\t":"2\t10.01\t45.35\t"}
1. Why does it ignore the tab as a separator?
2. Why does Splunk add the first line to each "event"? I have 1,000 lines; Splunk sees 1,000 events, and each event has a single field "id" with the first line "1 12.01 45.35" always at the beginning of the event.

I have no idea what Splunk is trying to do...


mreynov_splunk
Splunk Employee
  1. Are you sure the separator is actually a tab and not several spaces in a row? Check with TextMate or a similar editor to make sure (or see the sketch after this list).
  2. Since your first field is id, Splunk takes what it considers to be the first field and gives it the label "id".
  3. Remove FIELD_QUOTE if you don't have quotes; otherwise Splunk may treat the entire record as quoted, which might further confuse it.
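A quick way to verify point 1 without an editor is a small Java sketch (the file name here is hypothetical) that prints each character of the first line as a code point; a real tab shows up as 9 and a space as 32:

import java.nio.file.Files;
import java.nio.file.Path;

public class DelimCheck {
    public static void main(String[] args) throws Exception {
        // Dump each character of the first line as a code point:
        // a real tab prints 9, a space prints 32.
        String first = Files.readAllLines(Path.of("sample.tsv")).get(0);
        first.chars().forEach(c -> System.out.printf("%d -> '%c'%n", c, c));
    }
}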

seregaserega
Explorer
  1. Yes, I generate that file
  2. I didn't understand that, but I tried renaming the field to XXX; no luck.
  3. Did it, no luck

woodcock
Esteemed Legend

Things are a bit different (mostly better) in 6.0 than in earlier releases; you can just do this in props.conf:

[SourceTypeForTSVwithNoHeader]
INDEXED_EXTRACTIONS = TSV
FIELD_NAMES = id,lon,lat

If your TSV has a header, then you don't even need the FIELD_NAMES line!

This has to be deployed to all of your Forwarders and the Splunk instances there have to be restarted before it will work.
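For completeness, the sourcetype still has to be assigned to the monitored file; a minimal inputs.conf sketch on the forwarder (the path is hypothetical):

[monitor:///var/data/points.tsv]
sourcetype = SourceTypeForTSVwithNoHeader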

seregaserega
Explorer

It doesn't work. It can't even parse a ','-separated file.
Really, really weird behaviour.

Here is the code that writes the file:

// Join one record's values with the delimiter and terminate the line.
fileWriter.append(
        Arrays.asList(entry.getProperties().getCellId(),
                c1.get(0), c1.get(1),
                c2.get(0), c2.get(1),
                c3.get(0), c3.get(1),
                c4.get(0), c4.get(1))
                .stream()
                .map(Object::toString)
                .collect(Collectors.joining(",")) + "\r\n");

I changed '\t' to ',', which doesn't help.
Changing \n to \r\n doesn't help either.

Now Splunk does its best to create a single long event out of my 10K-line file. I have no idea why it tries to do that.
Before that, Splunk put all the fields into the first one.
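One thing I haven't ruled out (just a sketch using settings I know of, not a confirmed fix) is forcing per-line event breaking in the props.conf stanza:

[SourceTypeForTSVwithNoHeader]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)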


woodcock
Esteemed Legend

Leave it as a comma and use this:

[SourceTypeForTSVwithNoHeader]
INDEXED_EXTRACTIONS = CSV
FIELD_NAMES = id,lon,lat

This has to be deployed to all of your Forwarders and the Splunk instances there have to be restarted before it will work.


seregaserega
Explorer

It doesn't work.
So the working solution is:

// Join one record's values with the delimiter and terminate the line.
fileWriter.append(
        Arrays.asList(entry.getProperties().getCellId(),
                c1.get(0), c1.get(1),
                c2.get(0), c2.get(1),
                c3.get(0), c3.get(1),
                c4.get(0), c4.get(1))
                .stream()
                .map(Object::toString)
                .collect(Collectors.joining(",")) + "\r\n");

Plus one extra step: add a header to the file.
Then Splunk does what's expected:
1. reads the file line by line
2. doesn't put the first line at the start of every event
3. correctly splits fields instead of ignoring \t (earlier, when Splunk put the whole line into the first field, the UI even showed the '\t' characters between the values)

weird!
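For the record, the extra step in code is just writing a header line before the data rows (the column names here are illustrative, since my real file has more fields). With a header present, FIELD_NAMES can be dropped from the stanza, as noted above:

// Write the header once before any data rows;
// with a header in the file, Splunk no longer needs FIELD_NAMES.
fileWriter.append(String.join(",", "id", "lon", "lat") + "\r\n");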
