Security

Why are Yum updates breaking a working configuration?

ChaoticMike
Explorer

Hello everyone, I'm a newbie, so please be gentle.

We are using Amazon Linux 2.  Our configuration has a Universal Forwarder co-hosted with a Jenkins controller node.  The UF monitors the log directories of the Jenkins host and forwards the subset of logs that match a recursive directory pattern.  There are thousands of files in the log directories, but the system was working fine until...
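For anyone reading along, a recursive monitor of that sort normally lives in the UF's inputs.conf. A minimal sketch; the Jenkins log path and index name here are illustrative guesses, not the actual config:

```ini
# Hypothetical UF monitor stanza: '...' recurses into subdirectories.
# Adjust the path and index to match the real deployment.
[monitor:///var/log/jenkins/.../*.log]
index = jenkins
disabled = false
```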

  • On or about 19 June 2023, the host ran one of its regular 'yum update' cron jobs, after which all log files stopped flowing from the host to our Heavy Forwarder.
  • We have investigated thoroughly: Amazon CloudWatch shows no sign that any of the hosts or network links involved are under unusual load, and netstat output likewise doesn't suggest the network is congested.
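In case it helps with triage, the update window can be narrowed down from yum's own transaction history. A sketch, assuming standard yum/rpm tooling on Amazon Linux 2 (transaction IDs and dates will differ per host):

```shell
# Find the transaction from the mid-June update window, then inspect it
# with 'sudo yum history info <ID>' to see which packages it touched.
sudo yum history list | head -n 20

# Cross-check: recently installed/updated packages, newest first.
rpm -qa --last | head -n 20
```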

My question is, "Has anyone else had their Splunk environment go bad recently due to a recent yum update?"

Hope someone can help,

Mike

Labels (1)
0 Karma
1 Solution

richgalloway
SplunkTrust
SplunkTrust

Have you verified the UF is running?  The yum update may have stopped the process and not restarted it.

Check the UF's splunkd.log file (/opt/splunkforwarder/var/log/splunkd.log) for messages that might explain why forwarding stopped.
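Concretely, something like this (paths assume a default /opt/splunkforwarder install; adjust if yours differs):

```shell
# Check whether the UF process survived the update
/opt/splunkforwarder/bin/splunk status

# If it isn't running, look for recent ERROR/WARN lines that explain why
grep -E 'ERROR|WARN' /opt/splunkforwarder/var/log/splunkd.log | tail -n 50
```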

---
If this reply helps you, Karma would be appreciated.

View solution in original post


ChaoticMike
Explorer

Thank you,

Incredibly, it didn't occur to me to look at the log 😂.  Sometimes I just can't see the wood for the trees!

I will see if the relevant info is still around - because we are running in AWS, hosts tend to come and go, and I don't know whether the log will have survived a redeployment.

Mike

0 Karma

cklunck
Path Finder

I’ve seen issues like this after an upgrade where Splunk wanted us to re-accept the license, and our startup script didn’t account for that. Rich is spot-on as usual - if there was a problem starting the process, splunkd.log will describe why.
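For that failure mode, a startup script can restart the UF non-interactively so a post-upgrade license prompt can't block it. A sketch using the standard Splunk CLI flags (default install path assumed):

```shell
# Accept the license and answer prompts automatically at restart,
# so an unattended post-yum-update start doesn't hang on a prompt.
/opt/splunkforwarder/bin/splunk restart --accept-license --answer-yes --no-prompt
```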
