Monitoring Splunk

DB Connect Timestamp Format

pmagee
Explorer

I am indexing a table which contains automated job results. I specified both a "Rising Column" and a "Timestamp Column" in the configuration of my monitor. The data is making it to the Splunk index, but with some issues. I often see the following error in the dbx.log file:

2013-01-08 15:52:12.062 dbx1556:WARN:ResultSetOutputPayload - Unrecognized timestamp format: '2013-01-08 15:51:50'

My timestamp column is formatted as "yyyy-MM-dd HH:mi:ss". It doesn't seem to make any difference whether I specify this in the monitor configuration or not.

Also, multiple rows of output from my table are often combined into single entries in the index, even though my rising column is tracking accurately.

How can I ensure that individual records are not combined and that the timestamp is correctly interpreted?

1 Solution

pmagee
Explorer

In more detail: I'm monitoring a table named dba_scheduler_job_run_details. I had to define my monitor as follows in the $SPLUNK_HOME/etc/apps/[your app name]/local/inputs.conf file. Note that I had to specify a SQL query for my monitor to get things back in the needed format.

[dbmon-tail://your_db_name/scheduler_job_run_details]
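# dbmon-tail defines a rising-column ("tail") input; the stanza name
# follows dbmon-tail://<database connection name>/<input name>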
host = db_host
index = oracle_dbx
output.format = kv
output.timestamp = 0
output.timestamp.format = yyyy-MM-dd HH:mm:ss
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss
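# Both timestamp format settings above are required; note that
# output.timestamp.parse.format cannot be set from the DBX UI (see below)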
query = select \r\n  to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date,\r\n  log_id,\r\n  owner,\r\n  job_name,\r\n  status,\r\n  error# return_code,\r\n  to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,\r\n to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date,\r\n  to_char(run_duration) run_duration,\r\n  instance_id,\r\n  session_id,\r\n  to_char(cpu_used) cpu_used,\r\n  additional_info\r\nfrom dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}
sourcetype = job_run_details
tail.rising.column = LOG_ID
interval = auto
table = scheduler_job_run_details

My rising column is LOG_ID. I did not specify a timestamp column in the monitor, but made sure that the timestamp was returned as the first column in my results; I suspect that is also why rows stopped being merged together, since each event now begins with a recognizable timestamp. The actual query (without all the extra \r\n codes Splunk inserts for line breaks) looks like this:

select 
  to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date,
  log_id,
  owner,
  job_name,
  status,
  error# return_code,
  to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,
  to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date,
  to_char(run_duration) run_duration,
  instance_id,
  session_id,
  to_char(cpu_used) cpu_used,
  additional_info
from dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}
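
For anyone wondering what the {{WHERE $rising_column$ > ?}} placeholder amounts to: as I understand it, DBX substitutes the rising column name and binds the last checkpointed value to the ? as a JDBC parameter on each polling interval. A rough sketch of a single poll (connection details, credentials, and the checkpoint value are made up for illustration, and the column list is trimmed; this is not DBX's actual code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TailPollSketch {
    public static void main(String[] args) throws SQLException {
        long lastLogId = 12345L; // hypothetical checkpoint saved from the previous run

        String sql = "select to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date, "
                   + "log_id, owner, job_name, status "
                   + "from dba_scheduler_job_run_details "
                   + "where LOG_ID > ?";   // {{WHERE $rising_column$ > ?}} expanded

        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@db_host:1521:ORCL", "splunk", "secret");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, lastLogId);      // the ? is bound to the checkpoint value
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // advance the high-water mark as rows come back
                    lastLogId = Math.max(lastLogId, rs.getLong("log_id"));
                    System.out.println(rs.getString("log_date")
                            + " log_id=" + rs.getLong("log_id"));
                }
            }
        }
        // DBX would persist the new high-water mark for the next interval.
        System.out.println("next checkpoint: " + lastLogId);
    }
}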

I can't fully explain why this works, but it does. One important caveat: as of DBX v1.0.6, this can't be configured correctly through the UI. The monitor must be set up by editing the inputs.conf file of whichever app needs the data.

pmagee
Explorer

Ok. I think I have this figured out, but the solution is NOT intuitive.

It boils down to this: in order to get things working, I had to convert the timestamp column in the database to text using Oracle's to_char(log_date,'YYYY-MM-DD HH24:MI:SS') function. Then I had to specify both of the following in the inputs.conf file:

output.timestamp.format = yyyy-MM-dd HH:mm:ss
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss

The output.timestamp.parse.format setting is described in the DBX documentation, but there is no way to set it from the user interface. Once both format settings were in place, everything started working correctly.
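
For what it's worth, my understanding is that DBX parses these patterns with Java's SimpleDateFormat, which would explain both the warning and the fix: "mi" is an Oracle minute token, not a Java one, so my original "HH:mi:ss" pattern could never match. A quick standalone check (assuming SimpleDateFormat really is what DBX uses under the hood):

import java.text.ParseException;
import java.text.SimpleDateFormat;

public class PatternCheck {
    public static void main(String[] args) throws ParseException {
        // The Java-style pattern parses the sample value from the dbx.log warning.
        SimpleDateFormat javaStyle = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        System.out.println(javaStyle.parse("2013-01-08 15:51:50"));

        // The Oracle-style pattern is not even a legal Java pattern:
        // 'i' is an illegal pattern character, so the constructor throws.
        try {
            new SimpleDateFormat("yyyy-MM-dd HH:mi:ss");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}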
