Archive

How can I recognize CSV with a header?

Path Finder

I used the DB Connect app to get record data from Microsoft SQL Server.
The output format setting was CSV with header.
However, the header line was recognized as record data too.

An intermediate output file ($SPLUNK/var/spool/dbmon/csvh-xxxxxxxxxx.dbmonevt) was produced.

The content of this file was as follows:

***SPLUNK*** host=xxxxx source=xxxxx sourcetype=xxxxx index=xxxxx
Field1, Field2, Field3
"test", "test2", "test3"

The following was also added to $SPLUNK/etc/apps/learned/local/props.conf and transforms.conf:

[dbmon-sppol-1]
KV_MODE = none
REPORT-AutoHeader = AutoHeader-1
given-type = dbmon:spool


[AutoHeader-1]
DELIMS = ","
FIELDS = "Fields1", "Fields2", "Fields3"
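If the auto-learned extraction does not fire, an equivalent search-time extraction can be written by hand against the final sourcetype instead of the learned stanza. The stanza names below are illustrative, and the field names must match the actual CSV header (note the learned config above says "Fields1" while the sample data says "Field1"):

```ini
# props.conf (in a local/ directory, bound to the final sourcetype)
[my_sourcetype]
KV_MODE = none
REPORT-csvfields = csv-fields-extraction

# transforms.conf
[csv-fields-extraction]
DELIMS = ","
FIELDS = "Field1", "Field2", "Field3"
```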

I believe there is no problem with these settings, but the resulting logs were nevertheless acquired incorrectly.


Path Finder

> the data is coming in and visible, but the fields aren't being extracted as you expect.

Yes. I expected that the first line of the data would be recognized as header fields.

> Is the sourcetype being set to dbmon-sppol-1?

No. The intermediate output file (csvh-xxxxxxxxxx.dbmonevt) has its sourcetype set to dbmon:spool, but the final sourcetype is my_sourcetype, which I specified.

The settings of the DB Connect app are as follows.

inputs.conf

[batch://$SPLUNK_HOME\var\spool\dbmon\*.dbmonevt]
...
sourcetype = dbmon:spool

[dbmon-dump://dbname/tablename]
...
query = select * from table
...
sourcetype = my_sourcetype

props.conf

[source::...csvh_*.dbmonevt]
...
CHECK_FOR_HEADER = true
HEADER_MODE = firstline
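As a side note, CHECK_FOR_HEADER was deprecated in later Splunk versions. On Splunk 6 and newer, a structured-data alternative would look roughly like the following; this is illustrative only, and whether it interacts cleanly with the DB Connect spool header line is an assumption, not something confirmed here:

```ini
# props.conf - hypothetical alternative (Splunk 6+) using index-time
# structured extraction instead of CHECK_FOR_HEADER.
# HEADER_FIELD_LINE_NUMBER = 2 assumes the ***SPLUNK*** directive
# occupies line 1 and the CSV header sits on line 2.
[source::...csvh-*.dbmonevt]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 2
```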

1. Execute the SQL query and export a spool output file. (This file's sourcetype is dbmon:spool.)
2. Import the spool output file. (The spool file's sourcetype is set to dbmon:spool.)
3. The spool file's first line starts with the Splunk header. (***SPLUNK*** host=xxxxx source=xxxx sourcetype=xxxx)
4. props.conf is set to HEADER_MODE = firstline, so the first line is treated as the Splunk header. (It contains the original host, source, sourcetype, etc.)
5. props.conf is also set to CHECK_FOR_HEADER = true, so the second line is treated as the CSV header.
6. Following the CHECK_FOR_HEADER setting, field information is added to $SPLUNK/etc/apps/learned/local/props.conf automatically.
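Outside Splunk, the two-layer parsing described in steps 3–5 can be sketched in plain Python. This is purely an illustration of the expected behavior, not Splunk's actual implementation; the host/source values are placeholders:

```python
import csv
import io

# Sample spool-file content: a ***SPLUNK*** directive line, then a CSV
# header line, then data rows (mirrors the csvh-*.dbmonevt example above).
spool = """***SPLUNK*** host=myhost source=mysource sourcetype=my_sourcetype index=main
Field1, Field2, Field3
"test", "test2", "test3"
"""

lines = spool.splitlines()

# Step 4 ("HEADER_MODE = firstline"): the first line carries event metadata.
meta_tokens = lines[0].removeprefix("***SPLUNK***").split()
metadata = dict(token.split("=", 1) for token in meta_tokens)

# Step 5 ("CHECK_FOR_HEADER = true"): the next line is the CSV header, and
# the remaining lines are records whose fields take the header names.
reader = csv.reader(io.StringIO("\n".join(lines[1:])), skipinitialspace=True)
header = next(reader)
records = [dict(zip(header, row)) for row in reader]

print(metadata["sourcetype"])   # my_sourcetype
print(records[0]["Field1"])     # test
```

If this behavior held end to end, the indexed event would carry Field1/Field2/Field3 as extracted fields, which is exactly what is not happening in the thread.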

I think that when the data is indexed, the fields should be assigned according to the field information that was added automatically.

Although the first line of the CSV data was extracted as an AutoHeader, the indexed data is actually treated as plain records, and no field has been assigned to each value.

Since the AutoHeader was extracted, I think the CHECK_FOR_HEADER setting is working correctly.
Am I wrong, or what is going on?


Splunk Employee

Hi, it's not clear what the problem is, but since you're talking about your fields I'll assume that the data is coming in and visible, but the fields aren't being extracted as you expect. Is the sourcetype being set to dbmon-sppol-1? That extraction won't take effect if it isn't.
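To check that assumption, a quick search such as the following (with the index name adjusted to the actual one) shows which sourcetypes the indexed events really carry:

```
index=xxxxx | stats count by sourcetype
```

If the events come back as my_sourcetype rather than dbmon-sppol-1, the learned extraction stanza will never apply to them.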
