I have a log file that I need the Splunk forwarder to re-read from the very beginning.
My inputs.conf entry is this:
However, I keep getting this message in splunkd.log:
04-27-2012 10:15:27.053 -0700 INFO WatchedFile - Will begin reading at offset=1361969172 for file='/var/log/app/prod/hostname0050.log'.
I would like it to re-read the entire file to get the past history.
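For reference, a monitor stanza for that file in inputs.conf would look something like this (the index and sourcetype names here are hypothetical, since the original entry isn't shown; the path is taken from the log message above):

```ini
[monitor:///var/log/app/prod/hostname0050.log]
index = app
sourcetype = app_log
disabled = false
```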
In my case, I had access to Splunk but was not able to touch the log files. I'm using the universal forwarder (4.3), and
splunk clean eventdata -index _fishbucket
failed with:
ERROR: Cleaning eventdata is not supported on this version.
So I took a wild guess, and this appears to have done the trick:
rm -rf /opt/splunkforwarder/var/lib/splunk/fishbucket
And yes, I'm just setting this up, so I'm not concerned about losing any splunk data.
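Since this is a destructive step, the full sequence can be sketched against a scratch directory standing in for the real install path; on an actual forwarder you would substitute /opt/splunkforwarder and run the stop/start commands shown in the comments:

```shell
# Sketch of the fishbucket reset, using a scratch SPLUNK_HOME so the
# steps can be tried safely.
SPLUNK_HOME=$(mktemp -d)
mkdir -p "$SPLUNK_HOME/var/lib/splunk/fishbucket"

# 1. Stop the forwarder first:  $SPLUNK_HOME/bin/splunk stop
rm -rf "$SPLUNK_HOME/var/lib/splunk/fishbucket"
# 2. Start it again:            $SPLUNK_HOME/bin/splunk start
#    With no fishbucket, every monitored file is read from offset 0.

[ ! -e "$SPLUNK_HOME/var/lib/splunk/fishbucket" ] && echo "fishbucket cleared"
```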
You have several methods:
Recommended: to reindex just one file, change the CRC of the file.
Edit the file and add a first line, for example a comment: "# splunk reindex".
The tailing processor will compare the CRC of the first 256 characters of the file with the list it maintains, detect the file as a new one, and index it.
Variant: if you are already using the option crcSalt=, changing the salt value will likewise change the computed CRC and force a reindex.
Big guns: reset the forwarder for all logs by blowing away the fishbucket index, which contains the read position for each monitored file. Beware: everything will be reindexed.
./splunk clean eventdata -index _fishbucket
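The CRC trick above can be illustrated with ordinary shell tools; cksum is not Splunk's exact hash, but the principle (a checksum over the head of the file changes when a line is prepended) is the same:

```shell
# Build a sample log file.
f=$(mktemp)
for i in 1 2 3 4 5; do
  echo "2012-04-27 10:15:27 app event $i" >> "$f"
done
before=$(head -c 256 "$f" | cksum)

# Prepend the comment line suggested above.
printf '# splunk reindex\n%s\n' "$(cat "$f")" > "$f"
after=$(head -c 256 "$f" | cksum)

# The checksum of the first 256 bytes has changed, so the tailing
# processor would treat this as a brand-new file.
[ "$before" != "$after" ] && echo "file now looks new"
rm -f "$f"
```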
You could clean the fishbucket on the forwarder. That will cause the forwarder to start over on its inputs.
Check out: http://wiki.splunk.com/Community:HowSplunkReadsInputFiles and http://blogs.splunk.com/2008/08/14/what-is-this-fishbucket-thing/
Yes, the app index has a whole lot of other application data.
This is just a one-time re-index of the single file; once it has been read, I was going to change it back to just tailing the file from that point on.
Is this a one-time need to re-index the file, or is it going to be continually monitored? I assume your 'app' index has other data, and therefore we can't just clean the index and re-index the file?