I have a CSV file that we're getting from an ALU application that is proving incredibly difficult to work with. The file represents metrics collected from various parts of the system and application, all melded into a single file. Each "group" (metrics collection) within the file has a differing number of fields, and the fields that are present are not always in the same order. Working with the entire file has seemed impossible, so I've had to parse it out into multiple individual CSV files, one per group. However, even at the group level there are inconsistencies.
Here's a sample of the parsed csv data for one group:
The main problem I'm having is that despite the file having no header row, Splunk insists on treating the first line as the header. Even if I don't use the INDEXED_EXTRACTIONS = csv option and instead use default settings with manual configurations, Splunk still identifies fields improperly. As an example from the above data, there should be a field called "Application" with three resulting values: Proprietary, NS, and GS. Instead, Splunk returns the field as "Application_Proprietary" with values of "Application=Proprietary, Application=NS, Application=GS". I have a feeling that if I can make it work on one field, I can apply the same fix across all of the remaining fields and finally make this ingest work properly. I've tried various options in props.conf to get the fields identified properly, but I've had no luck. Any help would be greatly appreciated!
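In case it helps to compare against: for a CSV file with no header row, the usual approach is to name the columns yourself with FIELD_NAMES so Splunk stops trying to infer a header from the first data line. A minimal sketch of the relevant props.conf stanza (the sourcetype name and column names here are placeholders, since I don't know your actual ones):

```ini
# props.conf -- sketch only; sourcetype and field names are placeholders
[alu_metrics_group1]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
# Declare the column names explicitly so the first data line
# is treated as data, not as a header
FIELD_NAMES = Application,Metric,Value
```

With FIELD_NAMES set, every line in the file should be parsed as a data row against those column names, which should give you an "Application" field with the plain values rather than the mangled "Application_Proprietary" field.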
That looks like it might have fixed the field issue, but it definitely caused problems with line breaking: it's now pulling all of the individual lines into a single event. I tried removing the BREAK_ONLY_BEFORE statement and replacing it with SHOULD_LINEMERGE = false, but it's still not breaking properly. Here's what I've got right now in props.conf:
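For reference, when each event is a single line, the usual combination is SHOULD_LINEMERGE = false together with an explicit LINE_BREAKER, and no BREAK_ONLY_BEFORE at all. A sketch with a placeholder stanza name (adjust to your actual sourcetype):

```ini
# props.conf -- sketch only; stanza name is a placeholder
[alu_metrics_group1]
SHOULD_LINEMERGE = false
# Break an event at every newline; this is the default LINE_BREAKER,
# stated explicitly here so nothing else overrides it
LINE_BREAKER = ([\r\n]+)
```

One gotcha worth checking if it still doesn't break: as I understand it, with INDEXED_EXTRACTIONS the structured-data parsing (including line breaking) happens on the forwarder, so these settings need to be in props.conf on the universal forwarder itself, with the forwarder restarted afterward, not just on the indexer.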