I'm using the most recent version of Splunk Light Forwarder to forward .csv files to my main Splunk server (4.2, build 96430). There are 30 files on each of 4 servers, and the files are updated with a few rows every minute.
I have noticed that after the services have been running for a few days, these .csv files all stop being indexed at around the same time. In the most recent incident, indexing stopped at around 26,000 rows, when the files were about 8.5 MB in size. A server that is less active and doesn't have as much data does not appear to be affected.
Looking at splunkd.log on the main server shows something very strange: at a certain point, Splunk decides that the timestamps, which are from less than a minute ago, are "outside of the acceptable time window":
12-07-2011 21:50:05.776 -0500 WARN DateParserVerbose - A possible timestamp match (Wed Dec 07 21:49:03 2011) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::\servername\logs\timings16_0.csv|host::servername|summary_timings|remoteport::53957"
12-07-2011 21:50:05.776 -0500 WARN DateParserVerbose - A possible timestamp match (Wed Dec 07 21:49:42 2011) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::\servername\logs\timings16_0.csv|host::servername|summary_timings|remoteport::53957"
followed by thousands of "similar messages suppressed" notices.
I have the following in props.conf, but I don't understand why Splunk suddenly decides these timestamps are out of range when they clearly are not, or why it only seems to do this after the files have reached a certain size.
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 999999
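For reference, here is a sketch of what I believe the full stanza would look like if I were to widen the window. The [summary_timings] stanza name is taken from the Context field in the warnings above, and the MAX_DAYS_AGO value is a guess at a setting I have not actually tried, not something currently in my config:

```ini
# Hypothetical props.conf stanza; stanza name taken from the warning's Context field.
[summary_timings]
# Accept timestamps up to 2 days in the future (currently set).
MAX_DAYS_HENCE = 2
# Accept an event timestamp far earlier than the previous event's (currently set).
MAX_DIFF_SECS_AGO = 999999
# Assumption: widen how far in the past a timestamp may be relative to current time.
MAX_DAYS_AGO = 10
```

I'm not sure widening MAX_DAYS_AGO is the right fix, though, since the rejected timestamps are less than a minute old.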
If I restart services so that new .csv files are generated, they begin being indexed again.
Any idea what's going on here?