We are testing whether Splunk can monitor our Avamar backup system agent job logs so we can see where backups are failing. Backup jobs are logged in individual log files, where one file contains the logs of one backup job. After a certain period, old log files are deleted.
So I added this to inputs.conf:
[monitor://C:\Program Files\avs\var\clientlogs\*.log]
But the log files don't get indexed. In the forwarder's splunkd.log I see these errors:
TailingProcessor - File will not be read, seekptr checksum did not match
TailingProcessor - File will not be read, is too small to match seekptr checksum
So I probably need some other settings in inputs.conf? I'd also like to see each log file as a single event in Splunk; is this possible?
OK, I'm now getting these log files into the indexer. Is there a way to make one log file = one event?
You need to add crcSalt to the inputs.conf stanza:
http://docs.splunk.com/Documentation/Splunk/5.0.3/Admin/Inputsconf
crcSalt = <SOURCE>
* Use this setting to force Splunk to consume files that have matching CRCs (cyclic redundancy checks). (Splunk only performs CRC checks against the first few lines of a file. This behavior prevents Splunk from indexing the same file twice, even though you may have renamed it -- as, for example, with rolling log files. However, because the CRC is based on only the first few lines of the file, it is possible for legitimately different files to have matching CRCs, particularly if they have identical headers.)
* If set, <string> is added to the CRC.
* If set to the literal string <SOURCE> (including the angle brackets), the full directory path to the source file is added to the CRC. This ensures that each file being monitored has a unique CRC. When crcSalt is invoked, it is usually set to <SOURCE>.
* Be cautious about using this attribute with rolling log files; it could lead to the log file being re-indexed after it has rolled.
* Defaults to empty.
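Putting it together, the monitor stanza from the question with crcSalt added might look like this (a sketch; the sourcetype name is an assumption, not something stated in the thread):

```
[monitor://C:\Program Files\avs\var\clientlogs\*.log]
# Add the full file path to the CRC, so Avamar job logs that share
# identical first lines are not skipped as already-seen files.
crcSalt = <SOURCE>
# Hypothetical sourcetype name; adjust to your own naming convention.
sourcetype = avamar:clientlog
disabled = false
```

Since these files are deleted after a while rather than rolled/renamed, the usual caution about crcSalt re-indexing rolled logs should not apply here.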
Yes, adding crcSalt makes the logs come into the Splunk indexer. Thank you, JSapienza.
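On the follow-up question (one log file = one event): one common approach is a props.conf stanza that merges every line of the file into a single event by using a break-before pattern that can never match. A sketch, assuming the hypothetical sourcetype avamar:clientlog and job logs small enough to fit in one event:

```
[avamar:clientlog]
# Merge lines into multi-line events...
SHOULD_LINEMERGE = true
# ...and never start a new event, because this regex matches nothing.
BREAK_ONLY_BEFORE = $NEVER_MATCHES^
# Raise the per-event line limit above the longest expected log file.
MAX_EVENTS = 100000
# Do not truncate long merged events (0 = no limit).
TRUNCATE = 0
```

This stanza belongs on the indexer (or heavy forwarder) that parses the data, not on a universal forwarder.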