
Individual records are merging into a single event in CSV monitoring

rajim
Path Finder

I am facing a bizarre problem with CSV file monitoring. I am monitoring a CSV file from a server path and the records are being indexed, but sometimes several records of the CSV file are merged into a single event. Attached is a screenshot of the config files I used to onboard this data.

[screenshot of the configuration files]

Below is a screenshot of the output I'm getting. The first event is a combination of multiple records, which is incorrect; the next two are fine.
[screenshot of the indexed events, with the first event containing several merged records]

Could anyone please help me resolve this so that I get a single event for each record?
Thanks in advance.

1 Solution

rajim
Path Finder

Thank you everybody for your comments. I found the answer and it's working.

In my case, the monitored file is on a server where the Universal Forwarder (UF) is installed, and I am onboarding the data through that UF. So I had deployed all of the configuration files to that server, i.e. to the UF. But parsing doesn't happen on the UF; it only happens on the Heavy Forwarder (HF) or the Indexer. That's why my props.conf and transforms.conf were not being used at all.
What I needed to do was create an app containing the props.conf and transforms.conf and deploy it to the HF through which the data flows from the UF server to the Indexer.
So I changed my original TA to keep only the inputs.conf and deployed it to the UF server. Then I created another TA with the props.conf and transforms.conf and deployed that TA to the HF.
Now indexing happens properly.
So in short (a sketch of the layout follows below):
UF TA ---> inputs.conf only
HF TA ---> props.conf and transforms.conf
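
For reference, a minimal sketch of that two-TA layout (the app names, monitor path, sourcetype, and index below are placeholders, not my actual values):

On the UF, in TA_csv_inputs/local/inputs.conf:

[monitor:///opt/data/sample.csv]
sourcetype = my_csv_sourcetype
index = my_index
disabled = false

On the HF, in TA_csv_parsing/local/props.conf (together with whatever transforms.conf stanzas you already have):

[my_csv_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)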




ssadanala1
Contributor

In your props.conf, along with

SHOULD_LINEMERGE = false

you need to specify a LINE_BREAKER so that the events are broken properly,

OR specify

INDEXED_EXTRACTIONS = CSV

From props.conf.spec:

INDEXED_EXTRACTIONS = <CSV|W3C|TSV|PSV|JSON>
* Tells Splunk the type of file and the extraction and/or parsing method
  Splunk should use on the file.
  CSV  - Comma-separated value format
  TSV  - Tab-separated value format
  PSV  - Pipe "|" separated value format
  W3C  - W3C Extended Log File Format
  JSON - JavaScript Object Notation format
* These settings default the values of the remaining settings to the
  appropriate values for these known formats.
* Defaults to unset.
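
Put together, either of these props.conf stanzas would be a reasonable starting point (the sourcetype name here is just a placeholder; pick one option, not both):

# Option 1: explicit event breaking
[my_csv_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

# Option 2: treat the file as structured CSV
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv

One caveat, per the props.conf documentation: INDEXED_EXTRACTIONS for structured files is applied by the forwarder that actually monitors the file, so with Option 2 the props.conf needs to be deployed to the UF that reads the CSV, not only to the HF or Indexer.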


rajim
Path Finder

I have used the SHOULD_LINEMERGE = false and LINE_BREAKER = ([\r\n]+) attributes, but it's not working.
