All Apps and Add-ons

Error with DBConnect App Inputs, Version 3.1.4

Sidharda
Path Finder

Hello Experts,

I am using the DBConnect app, version 3.1.4, with an Oracle DB. I have set up a SQL query and saved the DB input.

But the data is not showing up in the index. When I searched the _internal index, I got the error below. Any thoughts?

Error:
yyyy-MM-dd HH:mm:ss.S -6:00 [QuartzScheduler_Worker-28] ERROR org.easybatch.core.job.BatchJob - Unable to process Record: {header=[number=37458, source="xxxxx", creationDate="xxxxx"], payload=[HikariProxyResultSet@xxxxx wrapping oracle.jdbc.driver.ForwardOnlyResultSet@xxxx]}

stefan_d
Path Finder

Hi

I also got this error after my input had been running fine for a while. In my case, I think it's a bug (or a new design feature 🙂 ) that has to do with how DBConnect internally 'caches' the structure of the SQL table/view for the input.

My input query used a rising column and looked something like:

SELECT * FROM "ABC-VIEW"
WHERE lastModifiedDate > ?
ORDER BY lastModifiedDate ASC
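
For context (and this is my understanding of the internals, not something I've verified in the code): on each run DBConnect binds the stored checkpoint value to the ? placeholder, so the query it effectively executes looks roughly like this, with a made-up timestamp standing in for the checkpoint:

SELECT * FROM "ABC-VIEW"
WHERE lastModifiedDate > '2019-06-01 12:34:56'  -- example checkpoint value saved for the rising column
ORDER BY lastModifiedDate ASC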


As I mentioned, all worked well until one sunny day when I got this error:


xxxxxxxx +0200 [QuartzScheduler_Worker-19] ERROR org.easybatch.core.job.BatchJob - Unable to process Record: {header=[number=5983, source="xxxxx", creationDate="xxxxxx"], payload=[HikariProxyResultSet@2004807770 wrapping SQLServerResultSet:150443]}
java.lang.IndexOutOfBoundsException: Index: 33, Size: 32
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at com.splunk.dbx.server.dbinput.task.processors.EventMarshaller.formatEvent(EventMarshaller.java:99)
at com.splunk.dbx.server.dbinput.task.processors.EventMarshaller.toJson(EventMarshaller.java:73)
at com.splunk.dbx.server.dbinput.task.processors.EventMarshaller.processRecord(EventMarshaller.java:46)
at com.splunk.dbx.server.dbinput.task.processors.EventMarshaller.processRecord(EventMarshaller.java:25)
at org.easybatch.core.processor.CompositeRecordProcessor.processRecord(CompositeRecordProcessor.java:38)
at org.easybatch.core.job.BatchJob.processRecord(BatchJob.java:179)
at org.easybatch.core.job.BatchJob.readAndProcessBatch(BatchJob.java:152)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:78)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)


After some investigation, the DB admin informed me that the structure of the view had changed (columns had been added). That matches the IndexOutOfBoundsException above: the input was apparently still marshalling events against a cached layout of 32 columns while the result set now returned more, hence Index: 33, Size: 32.

I saved the input (open, execute SQL, next, save) via the GUI and it started working again. 

For me it was a nice capability to set up a DB input with "SELECT * ..." and brag about Splunk being able to handle DB changes on the fly. But I've since had to swallow the brag, because it turns out that in DBConnect v3.x things can break:

  • when the rising column's position changes
  • ...and now, it seems, when the column/view structure changes (a defensive rewrite is sketched below)
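
One way to guard against the second case is to avoid "SELECT *" and name the columns explicitly, so newly added columns can't silently shift positions in the result set. A minimal sketch, with made-up column names:

SELECT id, name, status, lastModifiedDate  -- hypothetical columns; list the ones you actually need
FROM "ABC-VIEW"
WHERE lastModifiedDate > ?
ORDER BY lastModifiedDate ASC

The trade-off is that brand-new columns won't be ingested until you update the query, but the input should keep running when the view changes underneath it.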

I'd like to hear if anyone else has come across this or can give a better explanation.


Happy Splunking!

S

deangoris
Explorer

Same effect here.
Re-saving the input fixed it.

splunkoptimus
Path Finder

Resaving the inputs worked for me.


Yepeza
Path Finder

Thanks for the great explanation. This worked for me as well. We are on DBX 3.3.0. Re-saving the input worked: the "cached" table structure got reset and the records were ingested correctly.


ron451
Engager

Hi,

Since I'm having a similar problem, I'm curious: have you got the problem solved? If so, what was the issue? I'm stuck with the same error, but already on header1.

Regards, Aaron
