I have a bash script that uses ausearch to query audit.log for events that I have tagged with a specific key in audit.rules.
This is the general idea of the script; a rough sketch of the skeleton follows the outline:
# Assign path variables
# Capture saved timestamp from last execution
# Save new timestamp for future execution
# Execute query using ausearch
# Redirect stdout and stderr to two different variables
# Check whether the stderr variable equals "<no matches>" and exit if it does
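Fleshed out, the skeleton looks roughly like this (the paths, the audit key, and the timestamp format are placeholders for illustration, not the exact values from my script):

#!/bin/bash
# Assign path variables (placeholders)
TS_FILE="/opt/scripts/last_run.ts"
AUDIT_KEY="my_audit_key"

# Capture saved timestamp from last execution (fall back to "recent" on the first run)
last_ts=$(cat "$TS_FILE" 2>/dev/null)

# Save new timestamp for future execution
date '+%m/%d/%Y %H:%M:%S' > "$TS_FILE"

# Execute query using ausearch; redirect stdout and stderr to two different variables.
# Note: $last_ts is intentionally left unquoted so the saved date and time become separate arguments to -ts.
err_file=$(mktemp)
stdout_var=$(ausearch -k "$AUDIT_KEY" -ts ${last_ts:-recent} 2>"$err_file")
stderr_var=$(cat "$err_file")
rm -f "$err_file"

# Exit if ausearch reported no matches on stderr
if [ "$stderr_var" = "<no matches>" ]; then
    exit 0
fi

# Emit the events on stdout for Splunk to index
echo "$stdout_var"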
Now this is where I have tried multiple things; while all of them work when executed from a terminal, none of them generate any results when Splunk executes the script.
echo $stdout_var
OR
echo $stdout_var > /path/to/tmp
cat /path/to/tmp
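(Side note: the unquoted $stdout_var above lets the shell collapse ausearch's multi-line output onto a single line; the quoted form preserves the line breaks when Splunk reads the script's stdout:)

echo "$stdout_var"
echo "$stdout_var" > /path/to/tmp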
I have even tried monitoring "/path/to/tmp", which is when I realized this might be a user permissions issue: the file is generated, but there is never any content in it.
Currently, SPLUNK_OS_USER=root, but does that mean that the script is executed as SPLUNK_OS_USER? Or do I have to configure the script through Splunk to run as a specific user?
Again, when I execute this command manually from the CLI as root, it works exactly as expected, but it generates nothing when executed through the scripted input.
EDIT:
So I continued debugging to find the issue.
1. The script is being executed as root (I placed "echo $UID" at the top of the script, which showed up in Splunk Web as an event that simply returned 0).
2. I have added "echo" commands at every step of the execution and found that it keeps exiting at the stderr variable check. This makes no sense: when I run exactly the same command with the same timestamp on the command line, it works as expected, but when Splunk executes it as a scripted input, ausearch apparently returns nothing (a rough sketch of that instrumentation is below).
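Roughly, the instrumentation looks like this, reusing the placeholder names from the sketch above (the debug log path is also a placeholder):

# Debug instrumentation (placeholder path); wrapped around the ausearch call
DEBUG_LOG="/tmp/ausearch_debug.log"
err_file=$(mktemp)

{
    echo "=== run at $(date) uid=$UID ==="
    echo "last_ts='$last_ts'"
    echo "command: ausearch -k $AUDIT_KEY -ts $last_ts"
} >> "$DEBUG_LOG"

stdout_var=$(ausearch -k "$AUDIT_KEY" -ts ${last_ts:-recent} 2>"$err_file")
echo "ausearch exit code: $?" >> "$DEBUG_LOG"
echo "stderr: $(cat "$err_file")" >> "$DEBUG_LOG"
rm -f "$err_file"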
I know this is starting to look like a bash script question, but from a Linux standpoint the script works as it should. I don't know what else to do at this point to make it work through Splunk.
After debugging as much as I could, I decided to change the way the data is processed:
Did you ever circle back to this issue and discover the actual cause? I have something similar: a script that runs via CLI as root or as the splunk user works as expected, but when executed via the Splunk application the behavior is different. I can work around it by outputting to a logfile and then using Splunk to ingest the content (sketched below).
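For reference, that workaround can be sketched roughly like this (the log path, key, and sourcetype are placeholders):

# Script appends results to a log file instead of writing to stdout
ausearch -k "my_audit_key" -ts recent >> /var/log/ausearch_results.log 2>/dev/null

# Then a monitor stanza in inputs.conf picks the file up, for example:
#   [monitor:///var/log/ausearch_results.log]
#   sourcetype = linux:audit:custom
#   disabled = 0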
This is a very complicated topic, but your best bet is to leverage this app:
https://splunkbase.splunk.com/app/2642/
Be sure to understand why many (most) people opt to use something like rlog.sh, which is packaged here:
https://splunkbase.splunk.com/app/833/
See here for some explanation:
https://answers.splunk.com/answers/311061/why-does-splunk-ta-nix-rlogsh-cause-huge-amount-of.html
I am actually creating my own custom app for our *nix deployments because the previous admins used the apps you're mentioning, and they consumed far more data than was really required. After going through the scripts within those apps and reviewing the requirements our teams have for Splunk, I decided not to use them and instead to use my experience as a Linux admin to create scripts that generate the data our teams actually want.