Getting Data In

Sending AIX audit log stream to Splunk

jfraiberg
Communicator

I was having major issues getting Splunk to work with the "text" file that we push all AIX audited commands to.

Originally we were doing something like the following to pipe the commands to a text file -

/usr/sbin/auditstream |/usr/sbin/auditselect -e "event == PROC_Execute " | auditpr -v >/var/audit/stream.out &

Unfortunately, while stream.out claims to be plain text, it still contains some non-printable characters that were causing major issues.
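The stray bytes are typically terminal control sequences. A quick way to confirm this, and a minimal sketch for stripping them, is shown below; the inspection path and the tr range are assumptions for illustration, not part of the original setup:

```shell
# Inspect the first bytes of the stream for control characters (path assumed):
#   head -c 512 /var/audit/stream.out | od -c | head
# Strip control bytes while keeping tab (\011) and newline (\012); demonstrated
# here on a sample record containing an escape sequence and a carriage return:
printf 'PROC_Execute\033[0m root\r\n' | tr -d '\000-\010\013-\037\177'
```

The same tr filter can be spliced into the streamcmds pipeline ahead of the redirect if you want to keep the file-based approach.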


taylorl2
New Member

IBM has just told me NOT to pipe to logger!
You can refer to this technote for more info http://www-01.ibm.com/support/docview.wss?uid=isg3T1011847
NOTE: IBM recommends audit output NOT be piped directly to 'logger' due to limitations with the 'logger' utility which may result in errors and/or problems with the audit processes
In case the technote is moved:

Redirecting audit log output to the syslog file
Question
Can the audit log(s) from a server be configured to be sent to a centralized repository server?

Cause
Many customers send various system logs to a centralized server for monitoring purposes. syslog has the capability to send its logs to a centralized server. The syslog 'logger' command can be utilized by the audit subsystem via a local script to transfer audit logs to the syslog output file.
NOTE: IBM recommends audit output NOT be piped directly to 'logger' due to limitations with the 'logger' utility which may result in errors and/or problems with the audit processes.

Answer
The following steps will pipe stream mode auditing output to the /usr/bin/logger command which will write the output to the specified syslog file.

1- Update /etc/security/audit/config file to use stream mode...

 start: 
 binmode = off 
 streammode = on 
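A quick sanity check that stream mode is actually enabled. To keep this runnable anywhere, it is demonstrated against a sample stanza written to a temp file; on a real host you would point grep at /etc/security/audit/config instead:

```shell
# Write a sample of the expected stanza (stand-in for /etc/security/audit/config):
cat <<'EOF' > /tmp/audit_config_sample
start:
        binmode = off
        streammode = on
EOF
# Count matching lines; prints 1 when stream mode is on:
grep -c 'streammode *= *on' /tmp/audit_config_sample
```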

2- Modify /etc/security/audit/streamcmds file to include the following two entries...

 # cat streamcmds 
 /usr/sbin/auditstream | auditpr -h eclrRti -v > /audit/stream.out &
 /etc/security/audit/send_to_logger 1>/dev/console 2>&1 &

This will cause audit to execute two separate commands when audit is started.

The first line, with auditstream, writes the audit stream output to a local file called /audit/stream.out. This is the suggested, reliable method to record audit entries.

For the second line, create the script to be executed upon audit start.
In the example, I use the file name /etc/security/audit/send_to_logger.

3- Edit the file as shown below, making any modifications you might like:

--START of file--

 #!/usr/bin/ksh

 count=0
 time1=0
 time2=0
 diff=0

 # make sure this file exists before calling tail
 touch /audit/stream.out

 while true
 do
     # use tail to pipe stream.out continuously to logger
     # (invoke tail by full path so the kill below can find it by pattern)
     /usr/bin/tail -f /audit/stream.out | /usr/bin/logger -r 400 -p local0.debug

     # if logger has died and we end up here, kill 'tail' if it is still alive;
     # grep for $$ to make sure we only kill the tail that has this script's PPID
     kill `ps -f | grep "/usr/bin/tail -f /audit/stream.out" | grep $$ | grep -v grep | awk '{print $2}'`

     # keep track of how many times logger has died and we've gone through the loop
     count=$((count+1))
     if [[ $count -eq 1 ]]; then
         time1=`date +"%s"`    # record the first time logger died
     fi

     if [[ $count -gt 1 ]]; then
         time2=`date +"%s"`
         diff=$((time2-time1))
         if [[ $diff -gt 60 ]]; then
             count=0    # reset the count if more than 60s elapsed since the first failure
         elif [[ $diff -le 60 && $count -ge 10 ]]; then
             # logger has died 10 times in under 60 seconds; something is wrong.
             # sleep so the loop does not spin and eat CPU
             sleep 120
             # you might send mail to root here to alert that logger keeps failing
             # reset the count and try again
             count=0
         fi
     fi
 done
 --END of file--

Make this file executable (555 permissions).

# chmod 555 /etc/security/audit/send_to_logger

The script will first create /audit/stream.out if it does not yet exist. It will then use 'tail -f' to read the contents of /audit/stream.out continuously as records are added, piping them to the logger command. If logger dies, the infinite 'while' loop will restart it. The script also handles the case where logger dies repeatedly - for example, if syslogd is shut down - so that it does not loop continuously in that situation.

When an 'audit shutdown' is done, the script will continue running - there is nothing that will signal it to die when auditing is shut down. You will need to kill it manually when you shut down auditing, as well as its 'tail' and 'logger' child processes.
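A sketch of that manual cleanup. The pattern-based kill is the same trick the script itself uses; to keep this runnable anywhere, it is demonstrated against a throwaway sleep process standing in for the real send_to_logger tree (the process names in the final comment are taken from the technote above):

```shell
# Stand-in for the send_to_logger process:
sleep 300 &
# Find the PID by command pattern; the [s] bracket keeps grep from matching itself:
pid=$(ps -ef | grep '[s]leep 300' | awk '{print $2}' | head -1)
kill "$pid"
wait "$pid" 2>/dev/null || true
# On a real host, repeat with patterns for 'send_to_logger',
# 'tail -f /audit/stream.out', and 'logger' to take down the whole tree.
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```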

4- Ensure /etc/syslog.conf contains a local0.debug entry and points to a log file...

local0.debug /tmp/syslog.out 

5- Ensure the syslog output file exists...

# touch /tmp/syslog.out 

6- Refresh syslogd...

# refresh -s syslogd 

7- Start auditing...

# audit start 

8- Perform some operation that will generate an audit log entry (depending on what events/objects you are auditing)

9- Stop auditing...

# audit shutdown 

10- Verify audit entries are written to the syslog.out file, eg...

 # cat /tmp/syslog.out 
 Aug 18 10:29:24 myhost syslog:info syslogd: restart 
 Aug 18 10:29:53 myhost local0:debug joe: event login status time command
 Aug 18 10:29:53 myhost local0:debug joe: --------------- -------- ----------- ------------------------ -------------------------- 
 Aug 18 10:29:53 myhost local0:debug joe: USER_Login root OK Tue Aug 18 10:29:53 2009 tsm
 Aug 18 10:29:53 myhost local0:debug joe: user: joe tty: /dev/pts/7 
 Aug 18 10:29:54 myhost local0:debug joe: USER_Exit root OK Tue Aug 18 10:29:54 2009 telnetd 
 Aug 18 10:29:54 myhost local0:debug joe: tty: User joe logged out on /dev/pts/7 

jfraiberg
Communicator

A simple fix for this issue is to send the stream through syslog first. This fixed my issue and reformatted the records correctly.

Here is the updated line used in the /etc/security/audit/streamcmds

/usr/sbin/auditstream |/usr/sbin/auditselect -e "event == PROC_Execute " | auditpr -v | /usr/bin/logger -p local0.debug &

jfraiberg
Communicator

You must also verify that local0.debug is actually being routed to a log file; check this in /etc/syslog.conf.
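To close the loop on the Splunk side, the syslog output file can then be picked up with a monitor stanza in inputs.conf. This is a minimal sketch: the path matches the technote above, and the sourcetype name is an arbitrary assumption - use whatever your deployment expects:

```
# hypothetical sourcetype name; pick whatever your deployment uses
[monitor:///tmp/syslog.out]
sourcetype = aix_audit
disabled = false
```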
