I have some CSV files with 30+ columns and I cannot get Splunk to ingest them. I keep getting CRC errors. I've tried using crcSalt and initCrcLength, but I keep getting the same error message below in splunkd.log. The files can be very similar to one another sometimes. Anyone have any ideas?
05-23-2017 16:15:43.313 -0400 ERROR TailReader - File will not be read, seekptr checksum did not match (file=*****\Public\Test\test.csv). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
[batch://\****\Public\Test\*.csv]
crcSalt = <SOURCE>
move_policy = sinkhole
sourcetype = pub_audit
index = main
DATETIME_CONFIG = CURRENT
Do as @somesoni2 says and clean up the garbage, then deploy the inputs.conf stanza below to your forwarder, restart all Splunk instances on that forwarder, and confirm that the files are disappearing (if they are not, then they are not being forwarded). MAKE SURE that each file has a different name that is never, ever, ever recycled; that is what the crcSalt = <SOURCE> line does.
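To see why similar files trip this error, here is a minimal sketch of the idea (not Splunk's actual implementation): Splunk fingerprints a file by checksumming its first initCrcLength bytes (256 by default), so CSVs that share a long header look identical. crcSalt = <SOURCE> folds the file's path into the checksum, so differently named files get distinct fingerprints. The file names and header below are hypothetical.

```python
import zlib

# Two hypothetical CSVs that share the same long header line,
# so their first 256 bytes are identical.
header = "col_" + ",col_".join(str(i) for i in range(40)) + "\n"
file_a = header + "1,2,3\n"
file_b = header + "9,8,7\n"

def init_crc(content: str, salt: str = "", length: int = 256) -> int:
    # Sketch only: checksum the first `length` bytes, optionally
    # salted with the source path (what crcSalt = <SOURCE> does).
    return zlib.crc32((salt + content[:length]).encode())

# Without a salt the checksums collide, so Splunk thinks it has
# already read the second file and skips it.
assert init_crc(file_a) == init_crc(file_b)

# Salting with each file's path makes the fingerprints distinct.
assert init_crc(file_a, r"\Public\Test\a.csv") != init_crc(file_b, r"\Public\Test\b.csv")
```

Raising initCrcLength attacks the same problem from the other side: checksum enough bytes that the files differ even without a salt.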
Also, your quadruple asterisks are quite strange; why is the path written that way? Are you missing backslash characters between them? Are you trying to match a file whose directory name is literally a string of asterisks? Or are you trying to match directory names that are four characters long?
Give this a try:

[batch://\****\Public\Test\*.csv]
crcSalt = <SOURCE>
initCrcLength = 4999
move_policy = sinkhole
sourcetype = pub_audit
index = main
The DATETIME_CONFIG attribute is a props.conf setting, not an inputs.conf one, so it does not belong in this stanza; set it under the sourcetype in props.conf instead.
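For completeness, a minimal props.conf stanza carrying that setting, assuming the pub_audit sourcetype name from the stanza above:

```
[pub_audit]
DATETIME_CONFIG = CURRENT
```

Deploy this to the indexer (or wherever parsing happens), not just the forwarder.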