Monitoring Splunk

Oracle auditing to XML files loaded into Splunk

thesand20
New Member

We are evaluating tools to consume the XML audit logs generated by Oracle's native auditing. It seems that Splunk has this capability, but I have been unable to find real-world test cases from users who have implemented it in a production environment.

I am looking to identify the following:
- Level of effort to configure Splunk to read the XML audit files
- How to translate audit_action and priv_used numbers to text values
- Any performance impact of the Splunk agent monitoring many XML log files; is there a maximum number of sessions that can be spawned?
- Ease of reporting on the audit data once it is loaded into Splunk (a list of some of our reporting requirements):
  - Follow a change request from logon to logoff
  - All users who logged in from 6am to 10am on a given day
  - Identify the user who altered a given table at a specified time
  - List all change requests for a specified time frame
  - List logons from employee IDs using generic database accounts
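For what it's worth, reports like these usually reduce to short searches once the audit fields are extracted. A hypothetical sketch, assuming an index named oracle_audit and extracted field names like db_user, action_name, and object_name (none of which are from the original post):

```
# All users who logged in between 6am and 10am on a given day
index=oracle_audit action_name=LOGON earliest="04/15/2013:06:00:00" latest="04/15/2013:10:00:00"
| stats count by db_user

# Identify the user who altered a given table at a specified time
index=oracle_audit action_name="ALTER TABLE" object_name=MY_TABLE
| table _time, db_user, os_user, userhost
```

The exact field names depend entirely on how the XML extractions are configured.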



cjimenezinc
New Member

I was trying to get the PDF file mentioned in these answers and I get:

404 — File not found.

Please let me know where I can find this information. Thanks.

My email is cjimenezinc@yahoo.com


pmdba
Builder

It may depend on your OS. On Linux (or UNIX), I have found that unless you are configured for the extended XML audit trail because you want to capture SQL statements, it is better to use the standard OS audit trail for Splunk monitoring. XML includes a lot of extra characters in the various tags, which really drives up the data volume that Splunk has to process. The standard OS audit trail, dumped to syslog or rsyslog, can be read by Splunk with no problem and includes all of the same information, but at about half the throughput cost. I have deployed these monitors on fairly busy systems and have not observed any significant performance impact.

I know it has been a while since this question was originally posted, but if you haven't worked out a solution yet (or for anyone else looking for answers), there are complete instructions on configuring audit trails in Oracle, Splunk inputs, field extractions, audit_action and priv_used translation, and more in the white paper "Real-Time Oracle 11g Log Analysis", available here: http://pmdba.files.wordpress.com/2013/05/real-time-oracle-11g-log-file-analysis.pdf


a_splunk_user
Path Finder

Using Splunk 5.0.1 on a Windows box.

My experience is that Oracle creates many, many of these little XML audit files. We currently have over 1.3 billion events accumulated over the course of a year or so. Our installation of Splunk has had no problem keeping up with tens of thousands of files per day across multiple database servers. If there is a hiccup in service for whatever reason, the system has caught up in less than a day.

On the indexer I have modified inputs.conf with batch stanzas using the "move_policy = sinkhole" option so that the files don't fill up the audit directory I have configured in Oracle; instead, Splunk reads each XML file and then deletes it. You will notice lots of error messages (in, say, the S.O.S. app) because Splunk will try to access XML files while they are still being written by Oracle. This is normal. We have had a few instances where Splunk removed a file before Oracle was done with it, but this is relatively rare.
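For anyone replicating this setup, a batch stanza looks something like the sketch below. The path, sourcetype, and index names here are illustrative assumptions, not the poster's actual configuration:

```
# inputs.conf -- batch input that consumes and deletes each XML audit file
# (path, sourcetype, and index are examples only)
[batch:///u01/app/oracle/admin/ORCL/adump/*.xml]
move_policy = sinkhole
sourcetype = oracle:audit:xml
index = oracle_audit
disabled = false
```

Note that a batch input with move_policy = sinkhole destructively consumes the files, which is exactly the behavior described above: Oracle's audit directory never fills up, but the files are gone once indexed.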

Performance is again dependent on your database activity. We index about 1 GB per day, and it can be sluggish on the reporting side. We ended up moving all the logons and logoffs to a separate index.

For reporting, we specify the sourcetype in the batch stanza mentioned above. There are a number of ways to handle XML input; I have simply set up a few custom field extractions.
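As one possible approach (not the poster's actual configuration), field extractions for a sourcetype like this can be defined in props.conf, either by letting Splunk auto-extract XML elements or with explicit regexes. The element names below are assumed examples:

```
# props.conf -- example extractions for the XML audit sourcetype
[oracle:audit:xml]
# Option 1: let Splunk auto-extract XML element names as fields
KV_MODE = xml

# Option 2: explicit inline extractions (element names are assumptions)
EXTRACT-db_user = <DB_User>(?<db_user>[^<]+)</DB_User>
EXTRACT-action  = <Action>(?<audit_action>[^<]+)</Action>
```

KV_MODE = xml is the lower-effort route; explicit regexes give tighter control over field names when the auto-extracted ones are awkward to search on.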

In the end, I would prefer Oracle to be able to write to a custom Windows event log, set to overwrite itself when it fills up. Unfortunately, when the Windows log option is set, Oracle just writes to the Application log. With our volume it essentially overwrites itself so quickly that all you see is Oracle logon/logoff data, and little else of consequence in a Windows environment.

Just my 2 cents.

Simeon
Splunk Employee

Answering part of this...

  • Splunk has commands to extract fields from, and index, XML or JSON data.
  • Translation of numbers to text can be done via lookups.
  • Splunk's universal forwarder is very lightweight and is not limited in the number of files it can monitor (it can reach into the millions).
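To sketch the lookup approach for audit_action translation: Oracle publishes the number-to-name mapping in its AUDIT_ACTIONS data dictionary view (SELECT action, name FROM audit_actions), which can be exported to a CSV lookup file. A few illustrative rows:

```
audit_action,action_name
1,CREATE TABLE
2,INSERT
3,SELECT
15,ALTER TABLE
100,LOGON
101,LOGOFF
```

With that CSV uploaded as a lookup (the file and field names here are assumptions), a search can then enrich events with something like `| lookup audit_actions audit_action OUTPUT action_name`, or the lookup can be made automatic in props.conf. The same pattern applies to priv_used via Oracle's SYSTEM_PRIVILEGE_MAP view.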

Simeon
Splunk Employee

You should get in touch with your local sales engineer, as they can help answer this question more thoroughly. You should also split your question into more singular, specific pieces; that will let people answer each part more easily.
