Getting Data In

How come my Splunk Universal Forwarder and props.conf are not parsing our CSV files properly?

TitanAE
New Member

I currently have a universal forwarder and an indexer.

The universal forwarder reads a number of CSV files and then ships them off to the indexer.

I also have a props.conf on both instances that reads:

[csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 9
TIMESTAMP_FIELDS = date
FIELD_DELIMITER = ,

I have checked the source type (CSV), made sure that the correct header field is set, and made sure the files can be forwarded from the forwarder to the indexer. However, it is not parsing the files.

What am I missing that isn't allowing the fields to be parsed properly?


woodcock
Esteemed Legend

In order for this to work, you must have an inputs.conf on your forwarder with a stanza something like [monitor:///.../*.csv] that has sourcetype = csv below it. You need all of it working together, and you need to restart the Splunk instance after you update these configuration files.
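As a minimal sketch of that pairing (the monitor path and file pattern here are placeholders; adjust them to wherever your CSV files actually live), inputs.conf on the forwarder would look something like:

    [monitor:///var/log/myapp/*.csv]
    sourcetype = csv
    disabled = false

Since INDEXED_EXTRACTIONS is applied on the universal forwarder itself, the matching [csv] props.conf stanza also needs to be present on the forwarder, not only on the indexer.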


laurie_gellatly
Communicator

Have you tried testing the upload of the CSV via the GUI?

Use your props settings to check that they are doing what you expect.


ddrillic
Ultra Champion

I just ran this on a forwarder:

    $ ./splunk cmd btool props list csv
    [csv]
    ANNOTATE_PUNCT = True
    AUTO_KV_JSON = true
    BREAK_ONLY_BEFORE = 
    BREAK_ONLY_BEFORE_DATE = True
    CHARSET = UTF-8
    DATETIME_CONFIG = /etc/datetime.xml
    HEADER_MODE = 
    INDEXED_EXTRACTIONS = csv
    KV_MODE = none
    LEARN_SOURCETYPE = true
    LINE_BREAKER_LOOKBEHIND = 100
    MAX_DAYS_AGO = 2000
    MAX_DAYS_HENCE = 2
    MAX_DIFF_SECS_AGO = 3600
    MAX_DIFF_SECS_HENCE = 604800
    MAX_EVENTS = 256
    MAX_TIMESTAMP_LOOKAHEAD = 128
    MUST_BREAK_AFTER = 
    MUST_NOT_BREAK_AFTER = 
    MUST_NOT_BREAK_BEFORE = 
    SEGMENTATION = indexing
    SEGMENTATION-all = full
    SEGMENTATION-inner = inner
    SEGMENTATION-outer = outer
    SEGMENTATION-raw = none
    SEGMENTATION-standard = standard
    SHOULD_LINEMERGE = False
    TRANSFORMS = 
    TRUNCATE = 10000
    category = Structured
    description = Comma-separated value format. Set header and other settings in "Delimited Settings"
    detect_trailing_nulls = false
    maxDist = 100
    priority = 
    pulldown_type = true
    sourcetype = 

You can run the btool command to see the combined results for the csv sourcetype, or maybe create your own sourcetype as my_csv, for example, so that you have no dependencies on the built-in csv definition.
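For example, adding btool's --debug flag shows not just the merged values but which configuration file each setting comes from, which helps spot a stanza being overridden somewhere unexpected (run from $SPLUNK_HOME/bin):

    $ ./splunk cmd btool props list csv --debug

Each output line is then prefixed with the path of the .conf file that contributed that setting.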


TitanAE
New Member

I'm not sure how this helps. For the most part, everything looks similar. That said, I did update my props.conf file to this:

[csv_test]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 9
FIELD_DELIMITER = ,
TZ = US/Pacific

My hope is that it will look at line 9 and see that it is the header. Great!

These should define the fields that I can search through in Splunk.
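As a sketch of what that setting expects (the preamble lines and column names below are invented for illustration), a file where the real header sits on line 9 would look something like:

    report generated 2024-01-01        <- lines 1-8: preamble, skipped
    ...
    date,host,status                   <- line 9: header row, becomes field names
    2024-01-01,web01,200               <- data rows parsed into those fields

With HEADER_FIELD_LINE_NUMBER = 9, Splunk skips the preamble, reads the field names from line 9, and extracts each subsequent row's values as indexed fields.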
