Getting Data In

How to enable audit logs in AIX (to record any administrative changes) and send it to splunk?

New Member

Currently we're able to get both syslog & audit logs - Linux:audit (sourcetype) - from Linux servers onto the Splunk platform. We're also able to get syslogs from AIX servers onto the Splunk platform. However, we're not able to get the audit logs (administrative changes) from AIX servers into Splunk. Kindly advise what configuration is needed at both the Splunk end and the AIX server end.


Re: How to enable audit logs in AIX (to record any administrative changes) and send it to splunk?

Contributor

Re: How to enable audit logs in AIX (to record any administrative changes) and send it to splunk?

New Member

I've already checked this link, as well as the other information currently available here, before posting my question. Kindly help, and note that only people who have actually done the AIX syslog & audit log integration with Splunk will be able to answer this.


Re: How to enable audit logs in AIX (to record any administrative changes) and send it to splunk?

SplunkTrust

EDIT 2018-04-07: updated the script; a small logic error meant logs generated between midnight and the script's first morning run were not sent.

I'd appreciate it if you accepted or up-voted this answer if appropriate, as it does take quite a bit of time to write a detailed response!

So there are multiple pieces to implementing this solution:

  • AIX configuration; in my example only object/file-system auditing was enabled
  • The user running Splunk was added to the audit and security user groups
  • Permissions on /etc/security/audit/objects were opened up to allow the group to write to it (which lets the Splunk process update the file)
  • Because write activity is minimal, I restart the audit daemon to obtain the data I need (expecting approximately 1000 or fewer file writes every few days)
  • The script runs at least once a day, if not more often
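
For reference, the group and permission steps above might look roughly like the following on AIX. This is a sketch, not taken from the original post: the user name splunk is hypothetical, and you should verify the commands against your own security policy before running them.

```
# Add the Splunk user to the audit and security groups (AIX chuser)
chuser groups=audit,security splunk

# Let the audit group write the objects file so auditUpdater.sh can update it
chgrp audit /etc/security/audit/objects
chmod g+w /etc/security/audit/objects
```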

I created multiple scripts for my solution. From a Splunk point of view, there is an inputs.conf file on each server with the monitoring stanza:

[script://./bin/runAuditd.sh]
index = main
sourcetype = auditd:aix
interval = 3600
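
On the indexing side, a props.conf entry for the sourcetype may help with timestamp extraction. This is a minimal sketch, assuming the two-line auditpr output format shown further down (where the timestamp looks like Thu Feb 15 15:29:36 2018); tune it for your own data.

```
[auditd:aix]
SHOULD_LINEMERGE = true
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 120
```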

This is the wrapper script, runAuditd.sh, which I customise per server to monitor particular directories:

#!/bin/sh
/opt/splunkforwarder/etc/apps/AIXauditd/bin/auditUpdater.sh /tmp true
/opt/splunkforwarder/etc/apps/AIXauditd/bin/auditUpdater.sh /etc false
/opt/splunkforwarder/etc/apps/AIXauditd/bin/auditRunner.sh

Now the above application is deployed to each server I want to monitor; AIXauditd is the app I deploy to all the servers I wish to audit, as it contains the common scripts I'm using.
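
If you push the app out with a deployment server, a serverclass.conf sketch could look like the following. The class name and hostname pattern are examples I've made up, not from my actual setup:

```
[serverClass:aix_audit]
whitelist.0 = aix*

[serverClass:aix_audit:app:AIXauditd]
restartSplunkd = true
```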

Now for the main part of the AIX auditing: this script updates the list of files I'm watching for write changes.
auditUpdater.sh:

#!/bin/sh
if [ $# != 2 ]; then
  echo "2 arguments required: the directory to monitor, and 'false' if you want the audit objects file appended to rather than overwritten"
  exit 1
fi

#Symlinks break the audit monitoring; it can monitor files/directories only
if [ "$2" != "false" ]; then
  #Overwrite the audit list with the current file list
  find "$1" ! -type l | awk '{printf("%s:\n\tw = Obj_WRITE\n\n",$1)}' > /etc/security/audit/objects
else
  find "$1" ! -type l | awk '{printf("%s:\n\tw = Obj_WRITE\n\n",$1)}' >> /etc/security/audit/objects
fi

Now in my case I only care about files being written to, so the above works perfectly for me.
For those unfamiliar with AIX auditing, object auditing on the filesystem only works one level below the listed directory. So if we're monitoring /tmp/,
any files/directories written directly inside /tmp/ are monitored, but files/directories created under /tmp/subdir/... are not. That's why the above script adds every file individually, so you see exactly which file changed.
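
To see what the generated objects file ends up looking like, here is the same find|awk pipeline run against a scratch directory instead of /etc/security/audit/objects; it is safe to try on any Unix box, not just AIX.

```shell
#!/bin/sh
# Demo of the object-stanza generation used in auditUpdater.sh,
# writing to stdout rather than /etc/security/audit/objects.
demo_dir=`mktemp -d`
touch "$demo_dir/app.conf" "$demo_dir/build.txt"
mkdir "$demo_dir/subdir"   # files created later inside subdir would NOT be audited
find "$demo_dir" ! -type l | awk '{printf("%s:\n\tw = Obj_WRITE\n\n",$1)}'
rm -r "$demo_dir"
```

Each path is emitted as its own stanza with a "w = Obj_WRITE" line, which is exactly the format the AIX audit subsystem expects in the objects file.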

Now I also created the auditRunner.sh script which exists to obtain the data from the audit daemon:

#!/bin/sh
#Restart to flush the output
audit shutdown > /tmp/auditRunnerDebugOutput.txt 2>&1; audit start

if [ ! -f /home/splunk/var/auditCheckerLastRun.txt ]; then
  mkdir -p /home/splunk/var/
  echo "date >= 01/01/18" > /tmp/auditDatePickTemp
else
  #Our start time is based on the last run time...
  lastTime=`cat /home/splunk/var/auditCheckerLastRun.txt`
  dateData=`perl -MPOSIX=strftime -e 'print strftime("%m/%d/%y %T", localtime($ARGV[0])), "\n"' $lastTime`

  #AIX has a nice feature that if you do "time >= 23:23:44 && date >= 04/05/18"
  #and assuming it's now the 6th April at 00:30, you will not see audit events between 00:00 and 00:30 because the time should be >= 23:23...
  #so the solution is to do "time >= 23:23:44 && date >= 04/05/18 || date >= 04/06/18" or similar...
  #This complicates the code but it is a requirement to make the solution work as expected
  curDateData=`perl -MPOSIX=strftime -e 'print strftime("%m/%d/%y %T", localtime()), "\n"'`
  curDateDataDay=`echo $curDateData | cut -d "/" -f2`
  prevDateDataDay=`echo $dateData | cut -d "/" -f2`

  if [ "$curDateDataDay" = "$prevDateDataDay" ]; then
      echo "time >= `echo $dateData | cut -d ' ' -f2` && date >=" `echo $dateData | cut -d ' ' -f1` > /tmp/auditDatePickTemp
  else
      #Now we need a special or condition to handle the before the next run but after 11PM situation
      #So add a >= today's date to ensure we pickup records after midnight but before the current run completes...
      echo "time >= `echo $dateData | cut -d ' ' -f2` && date >=" `echo $dateData | cut -d ' ' -f1` "|| date >=" `echo $curDateData| cut -d ' ' -f1` > /tmp/auditDatePickTemp
  fi
fi

date +%s > /home/splunk/var/auditCheckerLastRun.txt

#Get all the details but remove the default output of:
#event           real     login    status      time                     command                         process  parent
#--------------- -------- -------- ----------- ------------------------ ------------------------------- -------- --------
#Note real events look like:
#Obj_WRITE       realuser    loginname OK          Thu Feb 15 15:29:36 2018 vi                              17105006 27263202
#        audit object write event detected /tmp/build.txt
/usr/sbin/auditselect -f /tmp/auditDatePickTemp /var/log/audit/trail  | /usr/sbin/auditpr -v -t1 -herlRtcpP | grep -vE "(\--------------- -------- -------- -----------)|event +real +login +status"
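
For reference, the generated /tmp/auditDatePickTemp holds auditselect expressions along these lines; the first form is a same-day run, the second is the first run after midnight. The times and dates here are made-up examples:

```
time >= 09:15:00 && date >= 04/06/18
time >= 23:30:00 && date >= 04/05/18 || date >= 04/06/18
```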

Using all of the above, I can obtain AIX audit data on a per-server basis for the configured directories.
This may or may not be the best approach for your environment; one item I would like to highlight is that I am running:

audit shutdown > /tmp/auditRunnerDebugOutput.txt 2>&1; audit start

The reason I do this is that the expected number of writes to the files/directories I'm monitoring is minimal; they only change when an application is re-deployed into production.
