Splunk Search

SPLUNK DB Connect: Timestamp Not Working

scc00
Contributor

I have specified the column that holds the timestamps for the logs and the format the timestamps are in. However, the time in Splunk is the time when the logs are dumped into Splunk. I need the timestamps from the database to be used as the event times when the data is dumped/tailed into Splunk.

Timestamp Format
"YYYY-MM-DD HH:MM:SS.000"

Does anyone know why this is happening?

1 Solution

pmagee
Explorer

In my case, I'm monitoring a table named dba_scheduler_job_run_details. I had to define my monitor as follows in the $SPLUNK_HOME/etc/apps/[your app name]/local/inputs.conf file. Note that I had to specify a SQL query for my monitor to get things back in the needed format.

[dbmon-tail://your_db_name/scheduler_job_run_details]
host = db_host
index = oracle_dbx
output.format = kv
output.timestamp = 0
output.timestamp.format = yyyy-MM-dd HH:mm:ss
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss
query = select \r\n  to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date,\r\n  log_id,\r\n  owner,\r\n  job_name,\r\n  status,\r\n  error# return_code,\r\n  to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,\r\n to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date,\r\n  to_char(run_duration) run_duration,\r\n  instance_id,\r\n  session_id,\r\n  to_char(cpu_used) cpu_used,\r\n  additional_info\r\nfrom dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}
sourcetype = job_run_details
tail.rising.column = LOG_ID
interval = auto
table = scheduler_job_run_details

My rising column is LOG_ID. I did not specify a timestamp column in the monitor, but made sure that the appropriate field was returned as the first column in my results. The actual query (without all the extra \r\n codes inserted by Splunk for line breaks) looks like this:

select 
  to_char(log_date,'YYYY-MM-DD HH24:MI:SS') log_date,
  log_id,
  owner,
  job_name,
  status,
  error# return_code,
  to_char(req_start_date,'YYYY-MM-DD HH24:MI:SS') req_start_date,
  to_char(actual_start_date,'YYYY-MM-DD HH24:MI:SS') actual_start_date,
  to_char(run_duration) run_duration,
  instance_id,
  session_id,
  to_char(cpu_used) cpu_used,
  additional_info
from dba_scheduler_job_run_details {{WHERE $rising_column$ > ?}}

Can't explain why this works (it's NOT intuitive at all, especially based on the UI), but it seems to be doing fine so far.

mholloway_uk
Explorer

This did the trick. Massive thanks.

scc00
Contributor

Thank you. This helped immensely. The logs are coming in with the right times now, and as separate events.

pmagee
Explorer

I had exactly the same problem. To get things working, I had to convert the timestamp column to text in the SQL query using Oracle's to_char(log_date,'YYYY-MM-DD HH24:MI:SS') function (if you're not using Oracle, use whatever equivalent function your database provides). Then I had to specify both of the following in the inputs.conf file:

output.timestamp.format = yyyy-MM-dd HH:mm:ss
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss

The output.timestamp.parse.format is detailed in the DBX documentation, but there is no way to set it from the user interface. Once the timestamp was converted to text and both format filters were set to match the output, everything seemed to start working correctly.
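For what it's worth, here is a minimal sketch of how those pieces fit together in an inputs.conf stanza, modeled on the monitor above. The connection name, host, index, sourcetype, table, and column names below are placeholders for your own setup, and to_char is Oracle-specific:

# Hypothetical dbmon-tail stanza; substitute your own connection, table, and columns.
[dbmon-tail://your_db_name/your_table]
host = db_host
index = your_index
sourcetype = your_sourcetype
table = your_table
output.format = kv
output.timestamp = 0
# Both format settings must match the text produced by to_char in the query.
output.timestamp.format = yyyy-MM-dd HH:mm:ss
output.timestamp.parse.format = yyyy-MM-dd HH:mm:ss
tail.rising.column = YOUR_RISING_COLUMN
interval = auto
# Return the converted timestamp as text, as the first column in the results.
query = select to_char(event_time,'YYYY-MM-DD HH24:MI:SS') event_time, your_rising_column, other_columns from your_table {{WHERE $rising_column$ > ?}}

The key points are that the converted timestamp comes back as the first column in the results and that both format settings match its text representation exactly.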

scc00
Contributor

I am not a database person, so do you mean converting the rising column to text at the database level, or within the Splunk DB Connect app?

And I'm updating the inputs.conf located at $SPLUNK_HOME/etc/system/local?

scc00
Contributor

Please note: I think the rising column is not being added to the interesting fields, and as a result Splunk treats each tail update as one log. Can we make it so that the rising column is included in the tail updates?
