Issue integrating Oracle DB with DB Connect Splunk

bryanblch
New Member

We are trying to integrate an Oracle DB. Batch mode seems to work fine, and we are able to get data with this query:

[screenshot: batch mode query returning results]

However, when we switch to "Rising Mode" and use the "EVENT_TIMESTAMP" column as the "Rising Column":

[screenshot: rising mode configuration with EVENT_TIMESTAMP as the rising column]

we get the following error:

"error in dbxquery command external search command exited unexpectedly"

[screenshot: dbxquery error message]

We have another DB with a similar rising configuration, and there we are able to run the query and save it without problems:

[screenshot: working rising configuration on another DB]

We opened a case with support: 

Case #3806630 Cannot integrate Oracle DB

  1. We tried multiple functions in the batch query input, but we still had issues retrieving data from the database.
  2. So we tried the following query:
     
    SELECT u.*
    FROM unified_audit_trail u
    WHERE event_timestamp > TO_TIMESTAMP('2025-08-27 16:00:01.195334', 'YYYY-MM-DD HH24:MI:SS.FF6')
    FETCH NEXT 10 ROWS ONLY;
     
    And we were able to retrieve data in the batch input.
  3. We then switched to the rising column input, but we cannot use the TO_TIMESTAMP function there, as the rising column input does not allow it in the WHERE condition (see the sketch after this list).
  4. If we use the rising column without the function, the query times out again and again due to the huge volume of data in the table.
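For reference, DB Connect runs a rising column input as a parameterized query: it binds the saved checkpoint value to a ? placeholder and expects the result sorted ascending on the rising column. A minimal sketch of that shape, reusing the table and column from the query above (the checkpoint is persisted between runs, which is where type conversion against a TIMESTAMP column tends to go wrong):

    SELECT u.*
    FROM unified_audit_trail u
    WHERE u.event_timestamp > ?      -- DB Connect substitutes the saved checkpoint here
    ORDER BY u.event_timestamp ASC   -- rising column must sort ascending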
Any suggestions at this point?
Support suggested the following:

We recommend creating an alias column in the database table that already holds the function-converted timestamp, so the function does not need to appear in the rising column input query. If that is not feasible, your DB team can instead create a stored procedure on the database end that is called from a batch input query to fetch the records for the last 10 minutes, with that batch input's cron schedule set to every 10 minutes in the DB Connect add-on. That way Splunk retrieves the database records ingested during the last 10 minutes.
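As a sketch of the first suggestion: since UNIFIED_AUDIT_TRAIL is a system view and cannot be altered, the "alias column" would likely live in a view owned by your DB team (the view and column names here are hypothetical). The view exposes the timestamp as a monotonically increasing number, so the rising column input needs no function in its WHERE clause:

    -- Hypothetical view: expose EVENT_TIMESTAMP as a sortable numeric rising column
    CREATE OR REPLACE VIEW audit_trail_rising AS
    SELECT u.*,
           TO_NUMBER(TO_CHAR(u.event_timestamp, 'YYYYMMDDHH24MISSFF6')) AS event_ts_num
    FROM unified_audit_trail u;

The batch alternative support describes, scheduled by cron every 10 minutes, reduces to a query like the following (note that late-arriving rows or scheduling jitter can cause duplicates or gaps at the window edges):

    -- Last 10 minutes of audit records, for a cron-scheduled batch input
    SELECT u.*
    FROM unified_audit_trail u
    WHERE u.event_timestamp > SYSTIMESTAMP - INTERVAL '10' MINUTE;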

However, it seems that is not an option for the customer.

 


isoutamo
SplunkTrust

Also, the best practice is not to use a timestamp field as a checkpoint column. There are many reasons for this. In a high-volume DB this field can contain several rows with the same value, and converting checkpoint values between different data types is not good practice either.

The best option is to use a sufficiently large serial (auto-increment) column as the checkpoint field. You must also have an index on that field, as in the sketch below.
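A sketch of that approach, assuming an ordinary application table your DB team owns (UNIFIED_AUDIT_TRAIL itself cannot be altered; the table and column names here are hypothetical, and adding the column rewrites existing rows, which can be slow on a large table):

    -- Hypothetical: add an auto-incrementing numeric checkpoint column (Oracle 12c+)
    ALTER TABLE app_audit_log ADD (rising_id NUMBER GENERATED ALWAYS AS IDENTITY);

    -- Index it so the rising query (WHERE rising_id > ? ORDER BY rising_id ASC) stays fast
    CREATE INDEX ix_app_audit_log_rising_id ON app_audit_log (rising_id);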


PickleRick
SplunkTrust

Have you actually checked the logs to see _how_ the query failed? Have you gathered and reviewed debug logs?
