DBX tail monitor timestamping Oracle

domgkc
Explorer

I have added a couple of queries to extract information from Oracle databases.

The DB Info shows that I have a field that is a DATE object. In Oracle the value is actually a timestamp, yet in Splunk we only get the date portion of that timestamp and not the full timestamp.

So when using the DATE field for timestamping, all events are indexed as occurring at midnight rather than at the actual time.

This also means we are unable to use a DATE field as the rising column.
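
To be clear, the time component is present on the Oracle side. For example (a minimal sketch against DUAL, since an Oracle DATE always carries hours, minutes, and seconds):

select to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') date_with_time from dual;

Yet once the field reaches Splunk, only the date portion is used for _time, so everything lands at midnight.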

zvaseqi
Explorer

I've tried using the JDBC driver to convert the date to a timestamp, but still no luck!
you can see my efforts over here:
http://answers.splunk.com/answers/246656/getting-mongodb-data-into-splunk-on-date-rising-co.html


pmdba
Builder

I would think that you could use a date field as a rising column, as long as you convert the format to something that Splunk will recognize as a rising integer. This means specifying a query in your monitor instead of an actual table/column.

For example, I'm monitoring a table named DBA_SCHEDULER_JOB_RUN_DETAILS. This table provides a rising LOG_ID column that makes for a good test comparison to my derived column. I used the following test query to compare a derived log_id column (MY_LOG_ID) to the actual LOG_ID generated by Oracle:

select
  log_date, 
  to_number(to_char(log_date,'JSSSSSFF')) my_log_id, 
  log_id 
from dba_scheduler_job_run_details 
order by 2 asc;

For my derived LOG_ID, I converted the timestamp into a sequential integer using the "J", "SSSSS", and "FF" format elements for Julian day, seconds past midnight, and fractional seconds (a standalone sketch of this conversion follows the results below). As the results show, the derived MY_LOG_ID correctly sorts my records by LOG_DATE, even down to the fractional second. In this particular case, it appears that LOG_ID is assigned when the job begins, while LOG_DATE is assigned when the job ends, so the sort order may actually be more accurate using the derived value.

LOG_DATE                                                              MY_LOG_ID     LOG_ID
-------------------------------------------------------------------- ---------- ----------
30-JUN-13 04.44.25.508439 AM -07:00                                  2.4565E+17      47474
30-JUN-13 04.44.25.902010 AM -07:00                                  2.4565E+17      47467
30-JUN-13 04.44.25.988791 AM -07:00                                  2.4565E+17      47470
30-JUN-13 04.44.26.061229 AM -07:00                                  2.4565E+17      47468
30-JUN-13 04.44.27.953932 AM -07:00                                  2.4565E+17      47471
30-JUN-13 04.45.00.211876 AM -07:00                                  2.4565E+17      47475
30-JUN-13 04.50.00.226114 AM -07:00                                  2.4565E+17      47476
30-JUN-13 04.55.00.225589 AM -07:00                                  2.4565E+17      47477
30-JUN-13 05.00.00.381395 AM -07:00                                  2.4565E+17      47478
30-JUN-13 05.00.00.388786 AM -07:00                                  2.4565E+17      47479
30-JUN-13 05.00.29.214254 AM -07:00                                  2.4565E+17      47480
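
To see how the format model builds the integer on its own, here is a minimal sketch with a hypothetical literal (the COLUMN directive is SQL*Plus-specific and only widens the numeric display so the value isn't rounded to scientific notation, as it is in the output above):

column my_log_id format 99999999999999999999
select
  to_number(to_char(
    to_timestamp('2013-06-30 04:44:25.508439','YYYY-MM-DD HH24:MI:SS.FF6'),
    'JSSSSSFF')) my_log_id
from dual;

The value is the Julian day, followed by the zero-padded seconds past midnight, followed by the fractional seconds, so a later timestamp always yields a larger integer as long as the fractional precision stays the same.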

Now I can define my monitor as follows in the $SPLUNK_HOME/etc/apps/[your app name]/local/inputs.conf file. Note that I had to specify a SQL query for my monitor to get things back in the needed format.

[dbmon-tail://your_db_name/scheduler_job_run_details]
host = db_host
index = oracle_dbx
output.format = kv
output.timestamp = 0
output.timestamp.format = yyyy-MM-dd HH:mm:ss
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss
query = select \r\n  to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date,\r\n  to_number(to_char(log_date,'JSSSSSFF')) my_log_id,\r\n  owner,\r\n  job_name,\r\n  status,\r\n  error# return_code,\r\n  to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,\r\n to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date,\r\n  to_char(run_duration) run_duration,\r\n  instance_id,\r\n  session_id,\r\n  to_char(cpu_used) cpu_used,\r\n  additional_info\r\nfrom dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}
sourcetype = job_run_details
tail.rising.column = MY_LOG_ID
interval = auto
table = scheduler_job_run_details

My rising column is MY_LOG_ID. I did not specify a timestamp column in the monitor, but made sure that the appropriate field was returned as the first column in my results. The actual query (without all the extra \r\n codes inserted by Splunk for line breaks) looks like this:

select 
  to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date,
  to_number(to_char(log_date,'JSSSSSFF'))  my_log_id,
  owner,
  job_name,
  status,
  error# return_code,
  to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,
  to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date,
  to_char(run_duration) run_duration,
  instance_id,
  session_id,
  to_char(cpu_used) cpu_used,
  additional_info
from dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}

It is important to note that this monitor can't be configured correctly (as of DBX v. 1.0.6) through the UI. The monitor must be set up through edits of the inputs.conf file for whatever application needs the data.

krugger
Communicator

Using a DATE field as a rising column doesn't seem to work. I think only auto-incrementing integer fields work correctly.

I am currently using a query to dump the data daily, because I couldn't get the tail to work correctly for me:

SELECT to_char(LOG_DATE,'YYYY-MM-DD HH24:MI:SS') AS LOGDATE, ... FROM ... WHERE ... AND LOG_DATE > CURRENT_DATE - 1

I must warn you that you have to add

output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss

to your inputs.conf, because the GUI seems to add a

output.timestamp.format = yyyy-MM-dd HH:mm:ss

which isn't in the specs...
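
Putting those pieces together, a dump-style input for this kind of daily query might look something like the sketch below. This is only an assumption-laden example: the connection name, index, and sourcetype are hypothetical, the ellipses stand for your own columns, tables, and filters from the query above, and I'm assuming the DBX 1.x dbmon-dump input honors the same output.timestamp.* keys as the tail example earlier in this thread.

[dbmon-dump://your_db_name/daily_log_dump]
host = db_host
index = oracle_dbx
sourcetype = daily_log_dump
output.format = kv
output.timestamp = 1
output.timestamp.column = LOGDATE
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss
query = SELECT to_char(LOG_DATE,'YYYY-MM-DD HH24:MI:SS') AS LOGDATE, ... FROM ... WHERE ... AND LOG_DATE > CURRENT_DATE - 1
interval = 86400

The LOGDATE alias is returned as text, and output.timestamp.parse.format tells Splunk how to read it back into a full timestamp.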
