We have an issue with a number of our UFs wherein they have stopped sending internal logs after a recent app update from the deployment server.
The UFs are still sending other events, there are no errors in splunkd.log indicating why the internal logs stopped, and the app changes pushed from the DS contain nothing that should affect this (i.e. nothing like a blacklist on _internal).
We have attempted restarting the UFs, but no change.
The number of "other" events is not that high (one host sent around 800 events in the last 15 minutes), so we are ruling out the UFs being overloaded and dropping internal logs for that reason.
Looking for any advice or troubleshooting steps I can use to figure out why these clients are no longer sending internal logs.
Thanks in advance.
You can run the following command to check the monitor status of individual files. The internal log files live in the $SPLUNK_HOME/var/log/splunk directory; verify they appear in the output and check how far reading has progressed.
# under $SPLUNK_HOME
./splunk list inputstatus
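To narrow that output down, you can grep it for the internal log files. The snippet below is a sketch against a simulated excerpt of `list inputstatus` output (the key names and paths are assumptions and may differ by version); on a real UF, pipe `./splunk list inputstatus` itself into the same grep:

```shell
# Simulated excerpt of './splunk list inputstatus' output (assumed format),
# filtered down to the filename, read percentage, and reader state.
printf '%s\n' \
  '/opt/splunkforwarder/var/log/splunk/splunkd.log' \
  '  file position = 1048576' \
  '  file size = 1048576' \
  '  percent = 100.00' \
  '  type = open file' \
  | grep -E 'splunkd\.log|percent|type'
# prints the filename line plus the 'percent' and 'type' lines
```

If the percent sits at 100 and the file still shows as being read, the tailer is keeping up; a file missing from the output entirely points at an input-configuration problem instead.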
Then go to $SPLUNK_HOME/var/log/splunk/splunkd.log and grep for ERROR lines, if any, related to TailReader, TCP connections, etc.
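As a sketch, the grep can be restricted to the components that read and forward the internal logs. The log lines below are fabricated samples purely to demonstrate the filter (real messages will differ); on the UF, point the grep at the actual splunkd.log instead:

```shell
# Fabricated sample splunkd.log lines (for illustration only).
cat > /tmp/splunkd_sample.log <<'EOF'
01-01-2024 10:00:00.000 +0000 INFO  TailReader - Batch input finished reading file
01-01-2024 10:00:01.000 +0000 ERROR TailReader - Ignoring path due to: cannot read file
01-01-2024 10:00:02.000 +0000 WARN  TcpOutputProc - Cooked connection to ip=10.0.0.1:9997 timed out
EOF

# Keep only ERROR/WARN lines from the tailing and output components.
grep -E 'ERROR|WARN' /tmp/splunkd_sample.log \
  | grep -cE 'TailReader|TailingProcessor|TcpOutputProc'
# prints 2
```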
An upvote would be appreciated if this reply helps, and please accept the solution!
Can you please run the following command to list all inputs monitored by the UF:
$SPLUNK_HOME/bin/splunk list monitor
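From that output you can grep directly for the internal logs. The block below runs the same filter over a simulated `list monitor` listing (the layout and paths are assumptions); on the UF, pipe the real command into it:

```shell
# Simulated 'splunk list monitor' output (assumed layout, for illustration).
printf '%s\n' \
  'Monitored Directories:' \
  '  $SPLUNK_HOME/var/log/splunk' \
  '    /opt/splunkforwarder/var/log/splunk/splunkd.log' \
  '    /opt/splunkforwarder/var/log/splunk/metrics.log' \
  | grep -cE 'splunkd\.log|metrics\.log'
# prints 2
```

A count of 0 on a real UF means the internal log monitor is not active at all, which points at an inputs.conf override rather than a forwarding problem.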
From the output, verify that sources like splunkd.log and metrics.log are shown.
All the internal log input configuration is present in $SPLUNK_HOME/etc/system/default/inputs.conf.
Were there any changes made to this file? Ideally, files under default should never be modified.
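For reference, the stock internal-log monitor stanza looks roughly like the fragment below (an approximation; verify against the file shipped with your version). You can also run `$SPLUNK_HOME/bin/splunk btool inputs list --debug` to see which app and file each effective setting comes from, which helps spot a DS-pushed override of this stanza.

```ini
# Approximate excerpt of $SPLUNK_HOME/etc/system/default/inputs.conf --
# the stanza that makes the forwarder send its own logs to _internal.
[monitor://$SPLUNK_HOME/var/log/splunk]
index = _internal
```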