Hi Splunk folks, I am getting the above errors in my _internal logs. Nothing in this post applies to my case: https://community.splunk.com/t5/Getting-Data-In/Getting-Error-from-TailReader/m-p/356760

I have verified that the file exists on the host, but the UF is unable to read it and send it to the indexers. I have checked permissions, the header is the same for all the files, there are plenty of resources on the host, and this is the only app running on the host where the UF runs. We have a distributed deployment.

Here comes the weird part: we miss some data (files) for a few days, and then ingestion works fine for a few days. When we find files that are not being ingested to the indexers by the UF, we manually touch them from the backend (which changes the timestamp) and then they are ingested successfully. But there are too many files to touch, and it is very time consuming (see the sketch after the configs below).

props.conf

[salesforce_csv_input]
TZ = GMT
SHOULD_LINEMERGE = false
TRUNCATE = 60000
pulldown_type = true
INDEXED_EXTRACTIONS = csv
CHECK_FOR_HEADER = true
KV_MODE = none
category = Structured
NO_BINARY_CHECK = true
FIELDALIAS.....
FIELDALIAS.....
FIELDALIAS....
FIELDALIAS.....
FIELDALIAS.....
FIELDALIAS...

inputs.conf

[monitor:///path/to/app/*-eventlogfile-splunk.csv]
index = salesforce
sourcetype = salesforce_csv_input
disabled = 0
initCrcLength = 1024
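For reference, this is a minimal sketch of the manual "touch" workaround we have been doing by hand, just to show what we currently do in bulk; it does not fix the underlying TailReader issue. The base path, filename pattern, and the 24-hour age threshold are assumptions for illustration and would need to be adjusted for the real environment.

#!/usr/bin/env python3
# Bulk-touch stale Salesforce CSV files so the UF re-evaluates them.
# Sketch of the manual workaround only, not a fix for the root cause.
import glob
import os
import time

BASE_PATH = "/path/to/app"               # same directory as the [monitor://] stanza (assumed)
PATTERN = "*-eventlogfile-splunk.csv"
MAX_AGE_HOURS = 24                       # touch files not modified in the last 24h (assumed)

cutoff = time.time() - MAX_AGE_HOURS * 3600

for csv_file in glob.glob(os.path.join(BASE_PATH, PATTERN)):
    if os.path.getmtime(csv_file) < cutoff:
        # Update atime/mtime to "now", mimicking a manual touch from the backend
        os.utime(csv_file, None)
        print(f"touched {csv_file}")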