Knowledge Management

How can I recognize CSV with a header?

ts_splunk
Path Finder

I used the DB Connect app to get record data from Microsoft SQL Server.
The output format setting was CSV with header.
The header line was being recognized as record data too.

An intermediate output file ($SPLUNK/var/spool/dbmon/csvh-xxxxxxxxxx.dbmonevt) was output.

The content of this file was as follows:

***SPLUNK*** host=xxxxx source=xxxxx sourcetype=xxxxx index=xxxxx
Field1, Field2, Field3
"test", "test2", "test3"
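For illustration, the ***SPLUNK*** line above can be read as whitespace-separated key=value metadata (a minimal sketch of the file layout, not Splunk's actual parser):

```python
# Hypothetical sketch: parse the ***SPLUNK*** metadata line that
# precedes the CSV payload in a .dbmonevt spool file.
meta_line = "***SPLUNK*** host=xxxxx source=xxxxx sourcetype=xxxxx index=xxxxx"

# Drop the ***SPLUNK*** marker, then split each token into key=value.
meta = dict(token.split("=", 1) for token in meta_line.split()[1:])
```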

And the following stanzas were added to $SPLUNK/etc/apps/learned/local/props.conf and transforms.conf:

[dbmon-sppol-1]
KV_MODE = none
REPORT-AutoHeader = AutoHeader-1
given-type = dbmon:spool

[AutoHeader-1]
DELIMS = ","
FIELDS = "Fields1", "Fields2", "Fields3"

I believe there is no problem with these settings, but the resulting indexed logs are incorrect.


ts_splunk
Path Finder

> the data is coming in and visible, but the fields aren't being extracted as you expect.

Yes. I expected that the first line of the data would be recognized as the header fields.

> Is the sourcetype being set to dbmon-sppol-1?

No. The intermediate output file (csvh-xxxxxxxxxx.dbmonevt) is set to dbmon-spool, but the final sourcetype is set to my_sourcetype, which I specified.

The DB Connect app settings are as follows.

inputs.conf

[batch://$SPLUNK_HOME\var\spool\dbmon\*.dbmonevt]
...
sourcetype = dbmon:spool

[dbmon-dump://dbname/tablename]
...
query = select * from table
...
sourcetype = my_sourcetype

props.conf

[source::...csvh_*.dbmonevt]
...
CHECK_FOR_HEADER = true
HEADER_MODE = firstline

1. Execute the SQL query and export the spool output file. (This file's sourcetype is dbmon-spool.)
2. Import the spool output file. (The spool file's sourcetype is set to dbmon-spool.)
3. The spool file's first line is a Splunk header (***SPLUNK*** host=xxxxx source=xxxx sourcetype=xxxx).
4. props.conf sets HEADER_MODE = firstline, so the first line is treated as a Splunk header. (It contains the original host, source, sourcetype, etc.)
5. props.conf also sets CHECK_FOR_HEADER = true, so the second line is treated as the CSV header.
6. Per the CHECK_FOR_HEADER setting, the field information is added to $SPLUNK/etc/apps/learned/local/props.conf.
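The steps above can be sketched end to end. This is a simplified illustration under the assumptions already stated (one ***SPLUNK*** metadata line, then a CSV header, then data rows), not Splunk internals:

```python
import csv
import io

def index_spool_file(text: str):
    lines = text.splitlines()
    # Steps 3-4 (HEADER_MODE = firstline): the first line is the Splunk
    # header carrying the original host, source, sourcetype, etc.
    meta = dict(kv.split("=", 1) for kv in lines[0].split()[1:])
    # Step 5 (CHECK_FOR_HEADER = true): the second line is the CSV header.
    reader = csv.reader(io.StringIO("\n".join(lines[1:])),
                        skipinitialspace=True)
    fields = next(reader)
    # Step 6: the learned field names would be written to
    # etc/apps/learned/local/props.conf; here we just apply them.
    events = [dict(zip(fields, row)) for row in reader]
    return meta, fields, events

spool = """***SPLUNK*** host=h1 source=s1 sourcetype=dbmon-spool index=main
Field1, Field2, Field3
"test", "test2", "test3"
"""
meta, fields, events = index_spool_file(spool)
```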

I think that when the data is indexed, the fields should be extracted according to the field information that was added automatically.

However, although the first line of the CSV data was extracted as an AutoHeader, the indexed data is actually treated as plain records, and no fields have been assigned to the data.

Since the AutoHeader was extracted, I think the CHECK_FOR_HEADER setting is working correctly. Am I wrong, or what am I missing?


jcoates_splunk
Splunk Employee

Hi, it's not clear what the problem is, but since you're talking about your fields I'll assume that the data is coming in and visible, but the fields aren't being extracted as you expect. Is the sourcetype being set to dbmon-sppol-1? That extraction won't take effect if it isn't.
