Couldn't find a similar question to this one. How are people retrieving logs from macOS Sierra that live in the unified logging database? This is a new logging technology released with Sierra (the logs are stored in a binary format). It has far better and more detailed logs than the deprecated system.log file; practically nothing is written to system.log in newer macOS versions. Ideally, I'd like to pull data from the database and append it to the system.log file so it gets picked up with the rest of our old-fashioned syslog (and forwarded by an old-fashioned forwarding server over UDP 514). The asl.conf mechanism appears to be superseded by unified logging as well. Any ideas?
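Something like this, run from cron or a scripted input, is roughly what I'm after (a sketch, not tested; the output path is just the existing system.log):

```shell
#!/bin/sh
# Rough idea: periodically dump recent unified-log entries, in syslog
# style, onto the file our old syslog pipeline already watches.
OUT="${OUT:-/var/log/system.log}"
if [ "$(uname)" = "Darwin" ]; then
    # macOS-only: pull the last 5 minutes of unified-log data as syslog text
    log show --style syslog --last 5m >> "$OUT"
else
    echo "not macOS; unified log tooling unavailable" >&2
fi
```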
Hi all, there is a Splunk Idea to track this issue: https://ideas.splunk.com/ideas/EID-I-562
You're welcome to follow the idea, vote for it, and add comments. There is active engineering work being done on this; commenting on that Splunk Idea is the best way to track progress and help shape the outcome.
I ended up kludging a pretty generic scripted input that:
- runs the log show command from start_date to end_date,
- greps for what you want using an include file,
- greps out stuff with an exclude file.
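A minimal sketch of that filter stage (the file paths are my own placeholders, not the app's real layout; the macOS-only log show call is shown as a comment so the snippet runs anywhere):

```shell
#!/bin/sh
# Sketch of the scripted input's filter stage. INCLUDE/EXCLUDE paths are
# assumptions, not the real app layout.
INCLUDE="${INCLUDE:-/tmp/include_patterns.txt}"
EXCLUDE="${EXCLUDE:-/tmp/exclude_patterns.txt}"

# Keep lines matching any include pattern, then drop exclude matches.
filter_logs() {
    grep -E -f "$INCLUDE" | grep -E -v -f "$EXCLUDE"
}

# On macOS the full pipeline would be roughly:
#   log show --style syslog --start "$START_DATE" --end "$END_DATE" | filter_logs
```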
Thanks for posting. How did you manage to deal with "log show" permissions? Is there any way other than putting the "splunk" user into the admin group?
dseditgroup -o edit -a splunk -t user admin
I don't currently have a Mac to test with, and I'm not a Mac guy, but something like this might work.
Add this to /etc/sudoers to permit the splunk user to run log without a password.
splunk ALL = NOPASSWD: /path/to/log
Edit the uf_macintosh/bin/mac_log_monitor.sh and add sudo to the command. Change this:
log show --style syslog --start "$START_DATE" --end "$END_DATE" | egrep -f $INCLUDE | egrep -vf $EXCLUDE
to this:
sudo /path/to/log show --style syslog --start "$START_DATE" --end "$END_DATE" | egrep -f $INCLUDE | egrep -vf $EXCLUDE
Let me know how it goes!
Thanks for the quick response and advice. I had to modify the config entry a little, but not much.
splunk ALL=(ALL) NOPASSWD: /usr/bin/log
One thing I've noticed: if "log show" is not allowed to run, or some other exception happens, the script still updates the last_run_date.txt file. I'm thinking of modifying the script so it only updates last_run_date.txt after the log show command has run successfully.
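One way to sketch that change (the collect function here is just a stand-in for the real log show pipeline, and the state-file path is an assumption):

```shell
#!/bin/sh
# Only advance last_run_date.txt when collection succeeds.
STATE_FILE="${STATE_FILE:-/tmp/last_run_date.txt}"

collect() {
    # Stand-in for: log show --style syslog --start ... | egrep -f ...
    # Replace with the real pipeline; here it just emits a sample line.
    echo "sample log line"
}

if output=$(collect); then
    printf '%s\n' "$output"
    date '+%Y-%m-%d %H:%M:%S' > "$STATE_FILE"   # advance only on success
else
    echo "log show failed; keeping previous start date" >&2
fi
```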
One possibility is to use osquery to pull the data from asl and put it into a file monitored by the splunk forwarder. And of course osquery exposes lots of other stuff you could grab too.
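For example, a scheduled query in the osquery config can snapshot the asl table into osqueryd's results log, which the forwarder then monitors (the interval and column choice are assumptions; check the asl table schema on your host first):

```json
{
  "schedule": {
    "asl_pull": {
      "query": "SELECT time, host, sender, level, message FROM asl;",
      "interval": 60,
      "snapshot": true
    }
  }
}
```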
This works - the part I'm struggling with is figuring out what to grab.
Working with the log command in Sierra lets you play with the logged data, but I don't see any guidance or recommendations on what to grab to meet standard audit requirements. If you can grab everything, great; but if you're concerned about license capacity, most of the stuff going to ASL looks like noise and should be filtered at the host.
Bumping xnumon as a pretty complete solution to this problem. You'll need to transform the input to be CIM compliant, since there is no app available at this time, but out of the box it's fairly on par with what Sysmon offers.
One other rabbit hole I went down to get audit log data was using auditreduce + praudit.
Again this works - audit data goes to splunk - but produces mostly noise. It checks a compliance box without being particularly useful.
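A hedged sketch of that pipeline (macOS-only tools, guarded so it's safe to run on hosts without them; the output path is an assumption):

```shell
#!/bin/sh
# Flatten BSD audit trails into line-oriented text a file monitor can ingest.
OUT="${OUT:-/tmp/audit_text.log}"
if command -v praudit >/dev/null 2>&1; then
    # auditreduce merges/filters trail files; praudit -l prints one
    # record per line, which suits syslog-style ingestion.
    auditreduce /var/audit/* | praudit -l >> "$OUT"
else
    echo "praudit not available on this host; skipping" >&2
fi
```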
I'll check out xnumon. Thanks.
It is probably best to contact Splunk if you need the data from unified logging; that way they can push SPL-129734 internally. For now we rely on some scripts from the Unix TA. I have heard that others use https://osquery.io/
For anyone else stumbling across this question: Splunk has an open enhancement request for this, SPL-129734. If this is something you need, opening a case with a reference to this question might accelerate the implementation.
You could create scripted or modular inputs to run the "log show" command and ingest the events.
The difficulty will be:
I was afraid this might be the answer. In our case we'd prefer real-time logs so that we can do some alerting. Having a script running a tail on the log output is not ideal, but it's something I had considered.
I'm reading your info on modular inputs, but it's a little confusing. I don't see a difference between using that vs a shell script.