Hello everyone, I'm a newbie, so please be gentle.
We are using Amazon Linux 2. Our setup has a Universal Forwarder co-hosted with a Jenkins controller node. The UF monitors the Jenkins host's log directories and forwards the logs that match a recursive directory traversal pattern. There are thousands of files in those directories, but everything was working fine until a recent yum update.
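For context, the monitor stanza in the UF's inputs.conf looks roughly like this (the path, index, and sourcetype here are illustrative placeholders, not our exact values):

[monitor:///var/log/jenkins/.../*.log]
index = jenkins
sourcetype = jenkins:log
disabled = 0

The ... in the path is Splunk's recursive directory wildcard, which is what I mean by the traversal pattern.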
My question is: has anyone else had their Splunk environment break after a recent yum update?
Hope someone can help,
Mike
Have you verified the UF is running? The yum update may have stopped the process and not restarted it.
Check the UF's splunkd.log file (/opt/splunkforwarder/var/log/splunk/splunkd.log) for messages that might explain why forwarding stopped.
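For example, something along these lines (the systemd unit name is an assumption - it depends on how boot-start was set up on your host):

sudo /opt/splunkforwarder/bin/splunk status
# or, if the UF was registered as a systemd service:
sudo systemctl status SplunkForwarder

# then look at the most recent splunkd messages for errors:
sudo tail -n 200 /opt/splunkforwarder/var/log/splunk/splunkd.log
sudo grep -iE 'error|fatal|license' /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -n 50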
Thank you,
Incredibly, it didn't occur to me to go look at the log 😂. Sometimes I just can't see the wood for the trees!
I will see if the relevant info is still around - because we are running in AWS, hosts tend to come and go, and I don't know whether the log survived a redeployment.
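One thing I can try even if the host was rebuilt: as far as I understand, a UF forwards its own internal logs to the indexers by default, so a search like this might still show the last splunkd messages from that box (the host value is just a placeholder for our Jenkins host):

index=_internal source=*splunkd.log* host=jenkins-controller* (log_level=ERROR OR log_level=WARN)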
Mike
I’ve seen issues like this after an upgrade where Splunk wanted us to re-accept the license, and our startup script didn’t account for that. Rich is spot-on as usual - if there was a problem starting the process, splunkd.log will describe why.
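If it does turn out to be the license prompt, one way to handle it is with Splunk's non-interactive start flags in the startup script - something along these lines (assuming the default UF install path):

sudo /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt

That way a restart after a yum update (or an instance rebuild) doesn't sit waiting for someone to accept the license interactively.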