I am importing a large CSV (esxtop output). I set the truncate limit (TRUNCATE) to 0 and was able to get the data in; however, I am unable to get Splunk to parse the header. Manually setting the field names in props.conf didn't work, and when I let it attempt the extraction automatically I get the following:
ERROR StructuredDataHeaderExtractor - Accumulated a line of 512256 bytes while reading a structured header, giving up parsing header
Is there a way to allow it to parse a larger header line?
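For reference, this is roughly what I tried in props.conf when setting the fields manually (just a sketch; esxtop_source is my own sourcetype name, and the real FIELD_NAMES list runs to thousands of entries):

[esxtop_source]
INDEXED_EXTRACTIONS = csv
TRUNCATE = 0
FIELD_NAMES = timestamp, counter_1, counter_2, ...

(counter_1, counter_2, etc. are placeholders for the actual esxtop column names.)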
Sorry, are you saying you have a CSV with a half-million-byte header? That's... just not reasonable.
Hello!
I have exactly the same problem: TRUNCATE = 0 for the new csv-long source type, but the header is still too long:
04-21-2020 10:59:55.611 +0000 ERROR StructuredDataHeaderExtractor - Accumulated a line of 540672 bytes while reading a structured header, giving up parsing header
Such long CSV lines are generated in my case by the esxtop tool from VMware.
Any clue?
Thanks!
Can't you divide the file? Something like the sketch below could split the columns across several narrower CSVs.
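A minimal Python sketch of one way to do the split, assuming the first column is the esxtop timestamp (repeated in every output file) and that every row has the same column count as the header; the 500-column chunk size and the _partN file naming are arbitrary:

import csv
import os

def split_csv_columns(src_path, cols_per_file=500):
    # Split a very wide CSV into several narrower CSVs so each
    # part's header line stays well under the extractor's limit.
    with open(src_path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)
        # Column ranges for each output file, skipping column 0
        # (the timestamp), which is prepended to every part.
        ranges = [range(i, min(i + cols_per_file, len(header)))
                  for i in range(1, len(header), cols_per_file)]
        base, ext = os.path.splitext(src_path)
        files = [open(f"{base}_part{n}{ext}", "w", newline="")
                 for n in range(len(ranges))]
        writers = [csv.writer(f) for f in files]
        # Write each part's header, then stream the data rows.
        for w, r in zip(writers, ranges):
            w.writerow([header[0]] + [header[i] for i in r])
        for row in reader:
            for w, r in zip(writers, ranges):
                w.writerow([row[0]] + [row[i] for i in r])
        for f in files:
            f.close()

split_csv_columns(r"D:\example_data\baseline.csv")

Each part would then be indexed as its own source, which sidesteps the header-size limit entirely.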
Hi,
Did you try changing the following setting in limits.conf?
[kv]
limit = 300
Refer to the following doc:
http://docs.splunk.com/Documentation/SplunkCloud/6.6.3/Data/Extractfieldsfromfileswithstructureddata...
I have the following in C:\Program Files\Splunk\etc\system\local\limits.conf
[kv]
limit = 10000000
01-22-2018 12:13:01.183 -0500 ERROR StructuredDataHeaderExtractor - Accumulated a line of 589824 bytes while reading a structured header, giving up parsing header
01-22-2018 12:13:01.191 -0500 WARN CsvLineBreaker - CSV StreamId: 9780662374608211335 has extra incorrect columns in certain fields. - data_source="D:\example_data\baseline.csv", data_host="", data_sourcetype="esxtop_source"
Did you restart the Splunk service?
Yes, I run the following between all changes:
c:\Program Files\Splunk\bin>splunk.exe stop
c:\Program Files\Splunk\bin>splunk.exe clean eventdata -index esxtop
c:\Program Files\Splunk\bin>splunk.exe clean eventdata -index _thefishbucket
c:\Program Files\Splunk\bin>splunk.exe start
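(For what it's worth: clean eventdata on the esxtop index wipes the previously indexed events, and cleaning _thefishbucket resets Splunk's file-tracking checkpoints, so the same CSV gets picked up again on restart.)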