I have a monitor on a log file that is continuously written to. It seems that the monitor keeps stopping and therefore I get no logs in the index. I can restart Splunk and logs come in for a bit, but then stop again. I can see this in the splunkd.log:
04-13-2016 14:19:37.331 +0000 INFO BatchReader - Removed from queue file='/var/log/akcloudmon/node1_cloudmon1.log'.
04-13-2016 14:19:47.239 +0000 INFO BatchReader - Removed from queue file='/var/log/akcloudmon/node1_cloudmon.log'.
Here is the input stanza:
[monitor:///var/log/akcloudmon/node*_cloudmon.log
sourcetype = akamai
source = akamai
index = akamai
disabled = 0
crcSalt = akamaisalt2
How can I fix this?
Just curious: does your input stanza have a trailing/closing square bracket ']'? I'm sure it does, but figured I'd ask just in case.
As to your solution...
Try changing your crcSalt to <SOURCE> instead; the literal strings are intended for one-shot inputs (I believe).
[monitor:///var/log/akcloudmon/node*_cloudmon.log]
sourcetype = akamai
source = akamai
index = akamai
disabled = 0
crcSalt = <SOURCE>
Actually, we did that as a first step. No dice.
We also tried a few other things, but to no avail.
Are the log files rolling over with the same name? If so, how often do they roll?
How many files are there in the directory with the file that you are monitoring?
Here's what we did to fix this issue: we upgraded the UF (universal forwarder) to an HF (heavy forwarder). We have decided to do this for all UFs on hosts with large data ingestion, and it has resolved the issues we were having. We also added:
minbatchsize_bytes = 10737418240
to limits.conf on these hosts as well.
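For anyone applying the same workaround, a sketch of how that limits.conf change might look on disk. This assumes the setting belongs in the [inputproc] stanza, and note that current Splunk documentation spells it min_batch_size_bytes (with underscores), so verify the exact name against your Splunk version before deploying:

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[inputproc]
# Files at or above this size are handed to the batch reader instead of
# the tailing processor. Raising it to 10 GB keeps large, continuously
# growing logs on the tailing processor, avoiding the BatchReader
# "Removed from queue" behavior seen in splunkd.log above.
min_batch_size_bytes = 10737418240
```

You can confirm the effective value on the host with `splunk btool limits list inputproc --debug`.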