Hello,
I'm having trouble importing a Fortinet log CSV file that is delimited with double quotes and separated by commas. The field name is included in every value:
"itime=1381384446","date=2013-10-10","time=06:54:06","devid=FG800C1234501000","vd=root","type=traffic","subtype=forward"
The data is not interpreted properly: I end up with one field per line, and the automatically generated "interesting fields" make no distinction between the field name and the value.
Every line has the same number of fields. Please help. Thanks.
Even better, use INDEXED_EXTRACTIONS = CSV in props.conf. This should auto-detect the quotes around the events and headers and adjust accordingly. If not, header-based index-time field extraction has controls you can use to adjust, such as FIELD_QUOTE and HEADER_FIELD_QUOTE.
http://docs.splunk.com/Documentation/Splunk/latest/Data/Extractfieldsfromfileheadersatindextime
Note this only works in Splunk 6.
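A minimal props.conf sketch of those settings, assuming a custom sourcetype name (fortinet_csv is made up for illustration; the exact values may need tuning against your data):

```ini
# props.conf -- sketch only; "fortinet_csv" is a hypothetical sourcetype name
[fortinet_csv]
INDEXED_EXTRACTIONS = CSV
# Comma-separated fields, each wrapped in double quotes
FIELD_DELIMITER = ,
FIELD_QUOTE = "
HEADER_FIELD_QUOTE = "
```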
That is not standard CSV. A normal CSV file has one header naming each column, followed by records containing the column values; embedding the field name in every value is an odd way to represent it. You would be better off stripping the quotes and simply importing it as a raw file. Since the data is then plain key=value pairs, Splunk will most likely interpret your fields correctly with no further processing.
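Stripping the quotes can be done with a one-liner before importing; a sketch using sed, where the file names are placeholders:

```shell
# Remove all double quotes so each line becomes plain key=value pairs
# separated by commas. fortinet.log / fortinet_clean.log are placeholder names.
sed 's/"//g' fortinet.log > fortinet_clean.log
```

After this, a sample line reads `itime=1381384446,date=2013-10-10,...`, which Splunk's automatic key=value extraction handles without extra configuration.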