I have a Splunk indexer which hasn't been indexing logs for the past 3-4 days. I'm trying to troubleshoot and have gone through the usual checklist of items I found by researching Splunkbase. The most common cause, of course, is a full disk, but I have over 50% of the disk free. Second, I haven't configured my indexer as a forwarder; all the logs I'm indexing are on the same box as the indexer. After reviewing splunkd.log, these are the only two things that stood out:
02-04-2012 10:58:48.643 WARN DateParserVerbose - The TIME_FORMAT specified is matching timestamps (Mon Oct 29 09:24:24 2012) outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE.
So just to debug the issue, in props.conf, I set:
MAX_DAYS_HENCE=2000
MAX_DAYS_AGO=10951
(and restarted Splunk), because I thought Splunk was rejecting logs with timestamps too far in the future.
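For reference, those settings have to live inside the stanza that matches the input; a minimal props.conf sketch might look like the following (the sourcetype name `my_app_logs` and the TIME_FORMAT shown are placeholders for whatever actually matches your logs):

```ini
[my_app_logs]
# strptime pattern matching timestamps like "Mon Oct 29 09:24:24 2012"
TIME_FORMAT = %a %b %d %H:%M:%S %Y
# Accept events up to ~30 years old (10951 is the documented maximum)
MAX_DAYS_AGO = 10951
# Accept events timestamped up to 2000 days in the future
MAX_DAYS_HENCE = 2000
```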
That didn't work either.
This is the other message:
02-06-2012 05:11:34.353 INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
02-06-2012 05:11:34.353 INFO TailingProcessor - ...continuing.
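That TailingProcessor message usually means the parsingQueue is full, which in turn usually means something downstream (parsing, indexing, or output) is stuck. One way to check is to look for queues reporting `blocked=true` in metrics.log under `$SPLUNK_HOME/var/log/splunk/`. A sketch of the filter, demonstrated on illustrative sample lines (the field layout here is representative, not copied from a real install):

```shell
# Filter Splunk metrics.log lines for queues that report blocked=true.
# A queue logs blocked=true when it is full and refusing new data.
blocked_queues() {
  grep 'group=queue' | grep 'blocked=true'
}

# Demo on sample lines; on a live system you would instead run:
#   blocked_queues < "$SPLUNK_HOME/var/log/splunk/metrics.log"
blocked_queues <<'EOF'
02-06-2012 05:11:30.100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, current_size_kb=500, max_size_kb=500
02-06-2012 05:11:30.100 INFO Metrics - group=queue, name=indexqueue, current_size_kb=12, max_size_kb=500
EOF
```

If a queue further down the pipeline (e.g. indexqueue) is the one blocked, the parsingQueue warning is just a symptom of back-pressure.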
Could someone please tell me, first, does the DateParserVerbose warning have anything to do with Splunk not indexing data AT ALL? Second, how can I resolve this?
Any help will be appreciated. Thank you.
Have you tried searching over "All Time" in the drop-down time range selector?
Can you post a snippet of the log format so we can get props.conf set correctly, if that's the issue?
Brian
This was my bad. I apologize; this turned out to be a syslog issue. Thanks so much for your help.