Why are Yum updates breaking a working configuration?

ChaoticMike
Explorer

Hello everyone, I'm a newbie, so please be gentle.

We are using Amazon Linux 2.  Our configuration has a Universal Forwarder co-hosted with a Jenkins controller node.  The UF monitors the Jenkins host's log directories and forwards those logs whose paths match a recursive (directory-traversal) wildcard pattern.  There are thousands of files in the log directories, but the system was working fine until...
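
Roughly speaking, the monitor stanza in our inputs.conf looks like the sketch below (the path and index name are illustrative, not our real ones):

    [monitor:///var/log/jenkins/.../*.log]
    # '...' recurses through subdirectories; '*.log' matches filenames
    index = jenkins
    disabled = 0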

  • On or about 19th June 2023, the host executed one of its regular 'yum update' cron jobs, after which all log files stopped flowing from the host to our Heavy Forwarder.
  • We have investigated thoroughly, and nothing in Amazon CloudWatch suggests that any of the hosts or network links involved are under unusual load.  Likewise, netstat output gives no indication that the network is congested (the kinds of spot checks we ran are sketched below).
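
For what it's worth, these are the sorts of quick host-level checks we ran (illustrative, not exhaustive):

    # Load averages and CPU/memory pressure
    uptime
    vmstat 1 5

    # Interface errors/drops and a socket summary
    netstat -i
    ss -s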

My question is, "Has anyone else had their Splunk environment break after a recent yum update?"

Hope someone can help,

Mike

1 Solution

richgalloway
SplunkTrust

Have you verified the UF is running?  The yum update may have stopped the process and not restarted it.

Check the UF's splunkd.log file (/opt/splunkforwarder/var/log/splunkd.log) for messages that might explain why forwarding stopped.
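
For example, with a default install path (adjust if yours differs), something like:

    # Is splunkd running?
    /opt/splunkforwarder/bin/splunk status

    # If it is stopped, look for clues from the last startup attempt
    grep -iE 'ERROR|FATAL' /opt/splunkforwarder/var/log/splunkd.log | tail -20

    # Then bring the forwarder back up
    /opt/splunkforwarder/bin/splunk start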

---
If this reply helps you, Karma would be appreciated.

ChaoticMike
Explorer

Thank you,

Incredibly, it didn't occur to me to go and look at the log 😂.  Sometimes I just can't see the wood for the trees!

I will see if the relevant info is still around - because we are running in AWS, hosts tend to come and go, and I don't know whether the log will have survived a redeployment.

Mike


cklunck
Path Finder

I’ve seen issues like this after an upgrade where Splunk wanted us to re-accept the license, and our startup script didn’t account for that. Rich is spot-on as usual - if there was a problem starting the process, splunkd.log will describe why.
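
If the license prompt does turn out to be the blocker, the usual fix is to have the startup script accept it non-interactively, e.g.:

    # Start the UF without waiting for an interactive license prompt
    /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt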
