Running Splunk 6.5.0; the host in question is a Linux box. I can see that it's collecting _internal logs, and the other defined "apps" for NMON, BladeLogic, OS, etc. are all working fine. But some other apps seem to have stopped collecting data since 1/18/2018 for some reason.
I've logged into the host, become the "splunk" user, and validated access to the logs in question. No permissions issues there.
I've checked the logs, and there is new data in them, so that looks good too.
I've restarted the splunkforwarder; everything else is collecting data, just not this particular app.
Where do I go from here? Should I look into the fishbucket? (Not entirely sure what to do in there anyway.)
Thanks!
Joe
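For background on the fishbucket mentioned above: Splunk's file monitor identifies each file by a CRC of its first 256 bytes (tunable via `initCrcLength` in inputs.conf) and records read offsets in the fishbucket against that CRC. If two files begin with the same 256 bytes (a shared banner or header line, common after log rotation), they produce the same key, and Splunk can treat the second file as already indexed and silently skip it. A minimal sketch of that collision, assuming nothing beyond the Python standard library (`head_crc` is a hypothetical helper approximating the idea, not Splunk code):

```python
import tempfile
import zlib
from pathlib import Path

# Splunk keys each monitored file by a CRC of its first initCrcLength
# bytes (256 by default). head_crc is a hypothetical stand-in for that
# check, using zlib.crc32.
def head_crc(path, length=256):
    with open(path, "rb") as f:
        return zlib.crc32(f.read(length))

tmp = Path(tempfile.mkdtemp())
banner = b"#" * 300  # identical 300-byte header in both files
(tmp / "a.log").write_bytes(banner + b"event one\n")
(tmp / "b.log").write_bytes(banner + b"event two\n")

# Same first 256 bytes -> same key, so a monitor that has already read
# a.log could consider b.log "already seen" despite different content.
print(head_crc(tmp / "a.log") == head_crc(tmp / "b.log"))  # True
```

If this turns out to be the cause, setting `crcSalt = <SOURCE>` on the affected monitor stanza mixes the file's path into the key; Splunk also ships a `btprobe` utility for inspecting fishbucket entries directly.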
Looking into this "tailing process" you listed...
Yes, the app is enabled. There are three files we're monitoring; the other two are working fine. Definitely a strange issue. The "system_logs" are the ones that mysteriously stopped working.
INPUTS.CONF
[monitor:///var/logs/cassandra/system.log]
disabled = false
sourcetype = system_logs
index = cassandra
[monitor:///var/logs/cassandra/audit/audit.log]
disabled = false
sourcetype = audit_logs
index = cassandra
[monitor:///var/logs/cassandra/solrvalidation.log]
disabled = false
sourcetype = solrvalidation_logs
index = cassandra
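As a sanity check on stanzas like these, you can mechanically confirm that each monitor:// target exists and is readable (Splunk's own view of the merged config comes from `splunk btool inputs list --debug`). A rough sketch, where `check_monitors` is a hypothetical helper and the demo runs against a throwaway config rather than the real one:

```python
import configparser
import os
import tempfile
from pathlib import Path

# Hypothetical helper: read an inputs.conf and report, for each
# [monitor://...] stanza, whether the target path exists and is
# readable by the current user.
def check_monitors(conf_path):
    cp = configparser.ConfigParser(strict=False)
    cp.read(conf_path)
    status = {}
    for section in cp.sections():
        if section.startswith("monitor://"):
            target = section[len("monitor://"):]
            status[target] = os.path.exists(target) and os.access(target, os.R_OK)
    return status

# Demo config: one real file, one deliberately missing file.
tmp = Path(tempfile.mkdtemp())
(tmp / "system.log").write_text("INFO started\n")
conf = tmp / "inputs.conf"
conf.write_text(
    f"[monitor://{tmp}/system.log]\ndisabled = false\n\n"
    f"[monitor://{tmp}/missing.log]\ndisabled = false\n"
)
print(check_monitors(conf))
```

In practice you'd run a check like this as the splunk user (e.g. via `sudo -u splunk`) so the readability result reflects the forwarder's actual permissions rather than your own.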
Do you still see the apps containing the stopped inputs present on that forwarder, and are those apps/inputs enabled?
If that looks good, you can also check the TailingProcessor:FileStatus endpoint as outlined here to make sure that Splunk is actually watching those files, and to see whether it reports any problems with them.