Security

Why are Yum updates breaking a working configuration?

ChaoticMike
Explorer

Hello everyone, I'm a newbie, so please be gentle.

We are using Amazon Linux 2.  Our configuration has a Universal Forwarder co-hosted with a Jenkins controller node.  The UF monitors the Jenkins host's log directories and forwards the logs that match a recursive directory pattern (roughly sketched below).  There are thousands of files in the log directories, but the system was working fine until...

  • On or about 19th June 2023, the host executed one of its regular 'yum update' cron jobs, after which all the log files stopped flowing from the host to our Heavy Forwarder.
  • We have investigated thoroughly, and nothing in Amazon CloudWatch suggests that any of the hosts or network links involved are under unusual load.  Similarly, looking directly at netstat output doesn't suggest the network is congested.
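
For context, the monitor stanza on the UF looks something like this (the paths, wildcard depth, and index name here are illustrative, not our real values):

  # inputs.conf on the UF (illustrative values only)
  [monitor:///var/log/jenkins/.../*.log]
  index = jenkins
  sourcetype = jenkins:build
  disabled = false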

My question is, "Has anyone else had their Splunk environment break because of a recent yum update?"

Hope someone can help,

Mike

1 Solution

richgalloway
SplunkTrust

Have you verified the UF is running?  The yum update may have stopped the process and not restarted it.

Check the UF's splunkd.log file (/opt/splunkforwarder/var/log/splunk/splunkd.log) for messages that might explain why forwarding stopped.
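
A quick way to check both, assuming a default UF install under /opt/splunkforwarder (adjust the path if yours differs):

  # Is splunkd running?
  /opt/splunkforwarder/bin/splunk status

  # Scan for recent errors around the time forwarding stopped
  grep -iE "error|fatal" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -50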

---
If this reply helps you, Karma would be appreciated.

ChaoticMike
Explorer

Thank you,

Incredibly, it didn't occur to me to go and look at the log 😂.  Sometimes I just can't see the wood for the trees!

I will see if the relevant info is still around; because we are running in AWS, hosts tend to come and go, and I don't know whether the log will have survived a redeployment.

Mike

cklunck
Path Finder

I’ve seen issues like this after an upgrade where Splunk wanted us to re-accept the license, and our startup script didn’t account for that. Rich is spot-on as usual - if there was a problem starting the process, splunkd.log will describe why.
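
If that's what happened here, starting the forwarder non-interactively with the license-acceptance flags usually clears it (assuming the default /opt/splunkforwarder install path):

  /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt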
