Splunk Search

Index CSV Missing Columns

willadams
Contributor

I have an interesting problem that I'm not sure how to solve. I am monitoring a CSV with approximately 232 column headings (it's a big data source). The data is being indexed, but some columns are not being extracted for some reason. For example, one missing column heading is "comp_ip".

 

A search in either Smart or Verbose mode doesn't show the field "comp_ip". However, if I write the following SPL:

index=foo sourcetype=csv | dedup comp_ip | table comp_ip

then Splunk happily shows me a table with all the values. If I run the search in Verbose mode and look back at the events, I can see the field listed under the interesting fields. But if I then revert to the plain search (i.e. index=foo sourcetype=csv), the interesting fields no longer include it. I have also checked that there are no additional interesting fields left unselected.

 

If I take the CSV file and do a manual "Add Data" upload with the same sourcetype applied, I can see the column "comp_ip" with the relevant data.

 

I am at a loss.

1 Solution

willadams
Contributor

Spoke with Splunk support, and the problem was resolved by changing the limits.conf configuration for [kv] within Splunk Enterprise.

 

Created the file limits.conf under system/local with the following stanza, and restarted Splunk:

 

[kv]
limit = 235
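For context, Splunk's automatic key-value extraction stops once the [kv] limit is reached (the default, typically 100, is well below 232 columns), which is why trailing columns never show up at search time. A minimal sketch of a sanity check, using a hypothetical stand-in header rather than the real monitored file, that compares a CSV's column count against the configured limit:

```python
import csv

# Hypothetical stand-in for the monitored CSV; in practice, read the
# real file's first line instead.
rows = ["comp_name,comp_ip,comp_os", "host01,10.0.0.5,linux"]

# Count the columns in the header row.
header = next(csv.reader(rows))
column_count = len(header)

# Auto key-value extraction stops after `limit` fields, so any limit
# below column_count means trailing columns are silently dropped.
kv_limit = 235
print(column_count, column_count <= kv_limit)
```

If the printed count exceeds the configured [kv] limit, raising the limit as above (or trimming unneeded columns) is the fix.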

 

 


thambisetty
SplunkTrust

You can define the header yourself using props.conf and transforms.conf:

 

props.conf

[sourcetype]
REPORT-fields = big_data_fields_extraction

transforms.conf

[big_data_fields_extraction]
DELIMS = ","
FIELDS = "field1","field2","field1254"




willadams
Contributor

My data is already delimited with ",", and that's proven by the manual upload of the CSV. This would add extra extractions to capture those specific fields, but it wouldn't accommodate changes to the fields over time (which is likely to happen). How can this be done without hard-coding specific fields in props.conf, i.e. more "dynamically"? And what is causing it in the first place?
