All Apps and Add-ons

DBX tail monitor timestamping Oracle


I have added a couple of queries to extract information from Oracle databases.

The DB Info shows that I have a field that is a DATE object. In Oracle the value is actually a timestamp, yet in Splunk we only get the date portion and not the full timestamp.

So when using the DATE field for timestamping, all events are indexed as occurring at midnight rather than at their actual time.

This also means we are unable to use a DATE field as the rising column.
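To make the symptom concrete, here is a minimal Python sketch (my own illustration, not DBX code) of what happens when only the date portion of a DATE column reaches the indexer: every event on a given day parses to midnight, so events within a day are indistinguishable by time.

```python
from datetime import datetime

# Hypothetical extracted value: the time of day has already been lost.
raw = "2013-06-30"
event_time = datetime.strptime(raw, "%Y-%m-%d")
print(event_time)  # 2013-06-30 00:00:00 -- all events collapse to midnight
```

Because every value on the same day compares equal, such a column can never rise within a day, which is why it fails as a rising column.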


I've tried using the JDBC driver to convert the date to a timestamp, but still no luck!
you can see my efforts over here:



I would think that you could use a date field as a rising column, as long as you convert the format to something that Splunk will recognize as a rising integer. This means specifying a query in your monitor instead of an actual table/column.

For example, I'm monitoring a table named DBA_SCHEDULER_JOB_RUN_DETAILS. This table provides a rising LOG_ID column that makes for a good test comparison to my derived column. I used the following test query to compare a derived log_id column (MY_LOG_ID) to the actual LOG_ID generated by Oracle:

select log_date,
  to_number(to_char(log_date,'JSSSSSFF')) my_log_id,
  log_id
from dba_scheduler_job_run_details
order by 2 asc;

For my derived LOG_ID, I converted the timestamp into a sequential integer using the "J", "SSSSS", and "FF" conversion elements for Julian day, seconds past midnight, and fractional seconds. As the results show, the derived MY_LOG_ID correctly sorts my records by log_date, even down to the fractional-second level. In this particular case, it appears that LOG_ID is assigned when the job begins, while LOG_DATE is assigned when the job ends, so the sort order may actually be more accurate using the derived value.

LOG_DATE                                                              MY_LOG_ID     LOG_ID
-------------------------------------------------------------------- ---------- ----------
30-JUN-13 AM -07:00                                  2.4565E+17      47474
30-JUN-13 AM -07:00                                  2.4565E+17      47467
30-JUN-13 AM -07:00                                  2.4565E+17      47470
30-JUN-13 AM -07:00                                  2.4565E+17      47468
30-JUN-13 AM -07:00                                  2.4565E+17      47471
30-JUN-13 AM -07:00                                  2.4565E+17      47475
30-JUN-13 AM -07:00                                  2.4565E+17      47476
30-JUN-13 AM -07:00                                  2.4565E+17      47477
30-JUN-13 AM -07:00                                  2.4565E+17      47478
30-JUN-13 AM -07:00                                  2.4565E+17      47479
30-JUN-13 AM -07:00                                  2.4565E+17      47480
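The same derivation can be sketched outside Oracle. The following Python function (my own illustration; the 1721425 offset converts Python's proleptic Gregorian ordinal to a Julian day number, and FF is assumed to have 6 digits, though Oracle's FF precision follows the column definition) mimics `to_number(to_char(ts,'JSSSSSFF'))` and shows why the result is a monotonically rising integer:

```python
from datetime import datetime

def derived_log_id(ts: datetime) -> int:
    """Mimic Oracle to_number(to_char(ts,'JSSSSSFF')):
    Julian day number ('J'), then seconds past midnight as a fixed
    5-digit field ('SSSSS'), then fractional seconds as a fixed
    6-digit field ('FF', precision assumed)."""
    jdn = ts.toordinal() + 1721425          # Gregorian ordinal -> Julian day number
    secs = ts.hour * 3600 + ts.minute * 60 + ts.second
    return int(f"{jdn}{secs:05d}{ts.microsecond:06d}")

# Later timestamps always yield larger keys, even across midnight,
# because each field has a fixed width.
print(derived_log_id(datetime(2013, 6, 30, 23, 59, 59))
      < derived_log_id(datetime(2013, 7, 1, 0, 0, 0)))  # True
```

The fixed widths are what make the concatenation order-preserving: a variable-width seconds field would break the comparison (e.g. "9" vs "10").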

Now I can define my monitor as follows in the $SPLUNK_HOME/etc/apps/[your app name]/local/inputs.conf file. Note that I had to specify a SQL query for my monitor to get things back in the needed format.

host = db_host
index = oracle_dbx
output.format = kv
output.timestamp = 0
output.timestamp.format = yyyy-MM-dd HH:mm:ss
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss
query = select \r\n  to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date,\r\n  to_number(to_char(log_date,'JSSSSSFF')) my_log_id,\r\n  owner,\r\n  job_name,\r\n  status,\r\n  error# return_code,\r\n  to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,\r\n to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date,\r\n  to_char(run_duration) run_duration,\r\n  instance_id,\r\n  session_id,\r\n  to_char(cpu_used) cpu_used,\r\n  additional_info\r\nfrom dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}
sourcetype = job_run_details
tail.rising.column = MY_LOG_ID
interval = auto
table = scheduler_job_run_details

My rising column is MY_LOG_ID. I did not specify a timestamp column in the monitor, but made sure that the appropriate field was returned as the first column in my results. The actual query (without the extra \r\n codes Splunk inserts for line breaks) looks like this:

select
  to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date,
  to_number(to_char(log_date,'JSSSSSFF'))  my_log_id,
  owner,
  job_name,
  status,
  error# return_code,
  to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,
  to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date,
  to_char(run_duration) run_duration,
  instance_id,
  session_id,
  to_char(cpu_used) cpu_used,
  additional_info
from dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}

It is important to note that this monitor can't be configured correctly (as of DBX v. 1.0.6) through the UI. The monitor must be set up through edits of the inputs.conf file for whatever application needs the data.
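The `{{WHERE $rising_column$ > ?}}` placeholder is how each poll is restricted to rows past the saved checkpoint. A generic sketch of that tailing mechanism (illustrative only, using sqlite3; the table and key values are made up and this is not DBX's internal code):

```python
import sqlite3

# In-memory stand-in for the Oracle table, keyed by the derived rising column.
conn = sqlite3.connect(":memory:")
conn.execute("create table job_run_details (my_log_id integer, status text)")
conn.executemany("insert into job_run_details values (?, ?)",
                 [(245650000100000, "SUCCEEDED"), (245650000200000, "FAILED")])

def poll(conn, checkpoint):
    """Fetch only rows with a key above the checkpoint, then advance it."""
    rows = conn.execute(
        "select my_log_id, status from job_run_details "
        "where my_log_id > ? order by my_log_id", (checkpoint,)).fetchall()
    return rows, (rows[-1][0] if rows else checkpoint)

rows, checkpoint = poll(conn, 0)                 # first poll: both rows
conn.execute("insert into job_run_details values (?, ?)",
             (245650000300000, "SUCCEEDED"))
new_rows, checkpoint = poll(conn, checkpoint)    # second poll: only the new row
```

This is also why the rising column must never decrease: a later row with a smaller key would fall below the checkpoint and be silently skipped.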


Using a DATE field as a rising column doesn't seem to work. I think only auto-incrementing integer fields work correctly.

I am currently using a query to dump the data daily, because I couldn't get the tail to work correctly for me:


I must warn you that you have to add

output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss

to your inputs.conf, because the GUI seems to add a

output.timestamp.format = yyyy-MM-dd HH:mm:ss

which isn't in the specs...
