There are multiple problems here. The rising column should be autofilled by Splunk, not hard-coded into your query. Also, the first column returned should be a timestamp so that Splunk can correctly index the data (in Splunk, it's all about when things happen). Read Log File Analysis for Oracle 11g for a primer on getting information from Oracle into Splunk, and see this post about date formatting when selecting data from Oracle into Splunk.
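As a quick sketch of the date-formatting idea (the table and column names here are illustrative only, not from your schema), convert the Oracle DATE to a string in a known format with to_char and return it as the first column:
select to_char(event_time,'YYYY-MM-DD HH24:MI:SS') event_time, event_id, event_detail from my_audit_table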
A properly constructed dbmon-tail input should look like this:
[dbmon-tail://orcl/scheduler_job_run_details]
host = localhost
index = oracle_dbx
output.format = kv
output.timestamp = 0
output.timestamp.format = yyyy-MM-dd HH:mm:ss
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss
query = select to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date, log_id, owner, job_name,
status, error# return_code, to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,
to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date, to_char(run_duration) run_duration,
instance_id, session_id, to_char(cpu_used) cpu_used, additional_info from dba_scheduler_job_run_details
{{WHERE $rising_column$ > ?}}
sourcetype = job_run_details
tail.rising.column = LOG_ID
interval = auto
table = scheduler_job_run_details
Note that the timestamp formats are defined, the timestamp of the event is converted to match using to_char (and is the first column returned in the query), and the rising column is specified in its own parameter and autofilled by Splunk via the $rising_column$ placeholder. Also note that while I have broken the query across lines here for readability, in my experience it is better to place the entire query on a single, unbroken line in your inputs.conf file, as shown below.
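For reference, here is the same query collapsed into the single-line form I recommend (it will wrap on screen here, but in inputs.conf it should be one physical line). At runtime Splunk substitutes the tail.rising.column value (LOG_ID) for $rising_column$ and binds the last checkpointed value to the ? placeholder, so only new rows are indexed on each poll:
query = select to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date, log_id, owner, job_name, status, error# return_code, to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date, to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date, to_char(run_duration) run_duration, instance_id, session_id, to_char(cpu_used) cpu_used, additional_info from dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}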