Getting Data In

Not all CSV fields getting extracted

a212830
Champion

Hi,

I have a CSV feed with about 700 fields, and it looks like Splunk is only auto-detecting about 100 of them. What's very strange is that it seems to stop extracting them in the middle, but then the ones at the end do get extracted.

For example, at the beginning I have a number of fields - pkt_drop_percent, wire_mbits_per_sec.realtime, alerts_per_second... - and then usr[0], idle[0], sys[0]... all the way up to usr[71], idle[71], sys[71]. It creates usr/idle/sys 0-24, skips 25-71, and then all the fields after usr[71], idle[71], sys[71] do get created. Has anyone ever run into this?
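Not from this thread, and it may well not be the cause here, but for context: search-time automatic key-value extraction has its own field-count caps in limits.conf. A minimal sketch of the relevant stanza (these are real limits.conf attributes; the values are only illustrative, so check the limits.conf spec for your version):

[kv]
# Maximum number of fields that automatic key-value extraction generates per event
limit = 100
# When non-zero, stop creating new extracted fields past this count
maxcols = 512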

Here are my props.conf settings from the heavy forwarder (HFW):

[sensor_info]
PREAMBLE_REGEX = ^#####################.*
ANNOTATE_PUNCT = false
MAX_TIMESTAMP_LOOKAHEAD = 35
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
HEADER_FIELD_LINE_NUMBER = 2
FIELD_DELIMITER = ,

Finally, are these considered INDEXED_EXTRACTIONS?
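For reference, several of the attributes above (PREAMBLE_REGEX, HEADER_FIELD_LINE_NUMBER, FIELD_DELIMITER) belong to the structured-data header extraction family in props.conf, and indexed extractions are only explicitly enabled when INDEXED_EXTRACTIONS is set. A minimal sketch of what that would look like for this stanza (illustrative only, not the deployed config):

[sensor_info]
# Explicitly enable structured (indexed) extraction for this CSV feed
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 2
PREAMBLE_REGEX = ^#####################.*
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 35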

1 Solution

a212830
Champion

Never mind. I created transforms on the SH and mapped the fields that way.
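For anyone landing here later, a minimal sketch of what that kind of search-time mapping on the search head could look like (the transform name and field list below are illustrative, not the actual configuration from this thread):

transforms.conf:

[sensor_info_csv_fields]
DELIMS = ","
# List every header field in order; the real feed has roughly 700 of them
FIELDS = pkt_drop_percent, wire_mbits_per_sec.realtime, alerts_per_second

props.conf:

[sensor_info]
REPORT-sensor_csv = sensor_info_csv_fields

With a DELIMS/FIELDS transform referenced via REPORT-, the field mapping happens at search time on the search head.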
