We had an EC2 instance become inaccessible via AWS Session Manager.
The root cause was the main volume filling up with multiple splunkforwarder-x.x.x RPM files in /usr/bin/.
Yesterday the filesystem was cleaned up, but today there is another copy of that RPM in the /usr/bin/ directory.
Does anyone know why this is happening?
Most probably some automated tool keeps downloading those package files onto your machine. What it is and why it does that, I have no idea. Did you check who owns those files? Neither Splunk Enterprise nor the Universal Forwarder touches /usr/bin on its own, unless badly broken by some misadministration.
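For example, you could check the ownership and timestamps of the files, then put an audit watch on the directory to catch whichever process writes the next one. A rough sketch (the RPM filename pattern is a guess based on your description, and the audit commands assume auditd is installed):

    # Who owns the stray RPMs, and when were they created?
    ls -l /usr/bin/splunkforwarder-*.rpm
    stat /usr/bin/splunkforwarder-*.rpm

    # Log every write to the directory under a searchable key
    auditctl -w /usr/bin/ -p wa -k splunk_rpm_watch

    # After the file reappears, see which user and executable created it
    ausearch -k splunk_rpm_watch -i

The ausearch output should name the process that drops the file, which points you at the automation responsible.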
There was an automation in the backend of the AWS AMI that installed an older version.
That was the issue, and we were able to update the backend code.
Thank you for the response.
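For anyone hitting something similar: the usual fix is to make the provisioning step idempotent, so it skips the download when the forwarder is already at the desired version. A minimal sketch only, assuming a shell-based user-data or provisioning script; the version string and download URL below are placeholders, not the actual setup described above:

    #!/bin/sh
    # Hypothetical provisioning snippet: only fetch the RPM when the
    # installed forwarder version differs from the one we want.
    WANTED_VERSION="9.0.0"   # placeholder version
    # If the package is absent, rpm prints an error string instead of a
    # version, which simply fails the comparison below.
    INSTALLED=$(rpm -q --queryformat '%{VERSION}' splunkforwarder 2>/dev/null)

    if [ "$INSTALLED" != "$WANTED_VERSION" ]; then
        # Download to /tmp, not /usr/bin, so a failed cleanup can't fill the root volume
        curl -fSL -o /tmp/splunkforwarder.rpm \
            "https://example.com/splunkforwarder-$WANTED_VERSION.rpm"
        rpm -Uvh /tmp/splunkforwarder.rpm && rm -f /tmp/splunkforwarder.rpm
    fi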
Splunk does nothing with the /usr/bin directory (or anything outside of $SPLUNK_HOME and $SPLUNK_DB, for that matter*), so something other than Splunk is putting the files there.
It might be a good idea to use Splunk to monitor disk space and send an alert when it becomes critically low.
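For example, if you have the Splunk Add-on for Unix and Linux collecting df output, an alert along these lines would flag a filling volume early. A sketch only; the index, sourcetype, and field names depend on how your inputs are configured:

    index=os sourcetype=df
    | eval pct_used = tonumber(replace(UsePct, "%", ""))
    | stats latest(pct_used) AS pct_used BY host, MountedOn
    | where pct_used > 90

Saved as a scheduled alert, a search like this would have flagged the root volume filling up well before the instance became unreachable.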
* Scripts configured to run in Splunk can, of course, touch any files or directories they have permission to. It's not best practice, but it is done at some sites. You may have a script running (in Splunk or not) that is trying to refresh the UF.