DB Connect Oracle recordwriter error

christopergwe
New Member

Hi,
establishing a connection to an Oracle DB via Splunk DB Connect works well as long as I use the manual Data Lab / Inputs / Execute SQL workflow. However, running it as a regular scheduled input doesn't work; there seems to be a write issue.
It worked before, until Splunk Enterprise was moved to a different server and back again. Maybe a timestamp issue occurred, but I couldn't figure out if that's the real problem.
Input health is 0%, and here's the error I see when searching index=_internal:

2019-02-21 14:22:16.366 +0100 [QuartzScheduler_Worker-7] ERROR c.s.d.s.dbinput.recordwriter.CheckpointUpdater - action=skip_checkpoint_update_batch_writing_failed
java.io.IOException: HTTP Error 400, HEC response body: {"text":"Error in handling indexed fields","code":15,"invalid-event-number":0}, trace: HttpResponseProxy{HTTP/1.1 400 Bad Request [Date: Thu, 21 Feb 2019 13:22:16 GMT, Content-Type: application/json; charset=UTF-8, X-Content-Type-Options: nosniff, Content-Length: 78, Vary: Authorization, Connection: Keep-Alive, X-Frame-Options: SAMEORIGIN, Server: Splunkd] ResponseEntityProxy{[Content-Type: application/json; charset=UTF-8,Content-Length: 78,Chunked: false]}}
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:132)
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:96)
at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36)
at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:79)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
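For what it's worth, the HEC response body in that trace is plain JSON, so it can be inspected directly. A small sketch (parsing the body copied from the error above; my reading is that "invalid-event-number" gives the position of the rejected event within the uploaded batch, so 0 would be the first event):

```python
import json

# HEC error response body copied from the stack trace above
body = '{"text":"Error in handling indexed fields","code":15,"invalid-event-number":0}'

err = json.loads(body)
# code 15 is HEC's "Error in handling indexed fields";
# "invalid-event-number" points at the offending event in the batch.
print(err["code"], err["text"], err["invalid-event-number"])
```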

kheo_splunk
Splunk Employee

An HTTP 400 error in the DB Connect app typically comes from the event data itself, so you can enable TRACE logging and check the HEC events for missing metadata or garbage data before they are uploaded to HEC.

1] TRACE can be enabled without stopping Splunk using DB Connect -> Configuration -> Settings -> Logging

2] Once TRACE is enabled, run the affected input. If events still don't show up on the Splunk side, check splunk_app_db_connect_server.log for missing metadata (source/sourcetype/index/host) at the end of the events, or for any garbage or null values included in the event.

Example

2019-03-12 20:09:00.025 -0400  [QuartzScheduler_Worker-16] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=finish_format_hec_events record=Record: {header=[RisingInputRecordHeader{risingColumnValue='25059'} number=8, source="dbx3", creationDate="2019-03-12 16:08:07.0"], payload=[{"time":"1552421287.000","event":"2019-03-12 16:08:07.000, ID=\"25059\", SALES_MANAGER=\"Nick Everd\", PRODUCT=\"0\", SALES_QTY=\"52\", SALES_AMOUNT=\"52000\", SALES_DATE=\"2019-03-12 20:08:07.0\"","source":"dbx3","sourcetype":"dbx3","index":"dbx3","host":"kheolin01"}]}
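If you want to script that check, here is a minimal sketch (the event JSON is taken from the example above; the list of required keys is my assumption of what a complete HEC event should carry):

```python
import json

# One formatted HEC event as it appears in the TRACE output above
hec_event = json.loads(
    '{"time":"1552421287.000",'
    '"event":"2019-03-12 16:08:07.000, ID=\\"25059\\"",'
    '"source":"dbx3","sourcetype":"dbx3","index":"dbx3","host":"kheolin01"}'
)

# Metadata that should be present on each event; anything missing
# or null here is a likely cause of a 400 response from HEC.
required = ("time", "event", "source", "sourcetype", "index", "host")
missing = [k for k in required if not hec_event.get(k)]
print("missing metadata:", missing or "none")
```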

christopergwe
New Member

Activating TRACE at the dbinput level worked.
It seems I do get the metadata for source, sourcetype, index, and host as they should be, but the "Error in handling indexed fields" error still occurs.

Could there be an error in processing the data? I received the data stream until Aug 31st, but I haven't been able to resume the process since.

2019-03-14 12:39:34.093 +0100 [QuartzScheduler_Worker-15] DEBUG c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=closing_db_reader task=Alvin_Log
2019-03-14 12:39:34.093 +0100 [QuartzScheduler_Worker-15] INFO org.easybatch.core.job.BatchJob - Job 'Alvin_Log' finished with status: FAILED
2019-03-14 12:39:34.093 +0100 [QuartzScheduler_Worker-15] ERROR org.easybatch.core.job.BatchJob - Unable to write records
java.io.IOException: HTTP Error 400, HEC response body: {"text":"Error in handling indexed fields","code":15,"invalid-event-number":0}, trace: HttpResponseProxy{HTTP/1.1 400 Bad Request [Date: Thu, 14 Mar 2019 11:39:34 GMT, Content-Type: application/json; charset=UTF-8, X-Content-Type-Options: nosniff, Content-Length: 78, Vary: Authorization, Connection: Keep-Alive, X-Frame-Options: SAMEORIGIN, Server: Splunkd] ResponseEntityProxy{[Content-Type: application/json; charset=UTF-8,Content-Length: 78,Chunked: false]}}
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:132)
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:96)
at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36)
at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:79)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2019-03-14 12:39:34.081 +0100 [QuartzScheduler_Worker-15] INFO c.s.d.s.dbinput.recordwriter.HttpEventCollector - action=writing_events_via_http_event_collector record_count=1000
2019-03-14 12:39:34.078 +0100 [QuartzScheduler_Worker-15] INFO c.s.dbx.server.dbinput.recordwriter.HecEventWriter - action=write_records batch_size=1000
2019-03-14 12:39:34.078 +0100 [QuartzScheduler_Worker-15] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=start_format_hec_events_from_payload record=Record: {header=[RisingInputRecordHeader{risingColumnValue='2018-08-31 19:28:46.0'} number=1000, source="Alvin_Log", creationDate="2018-08-31 19:28:46.0"], payload=[EventPayload{fieldNames=[FID, AUFTRAG_NR, DATUM, FBG_NR, LOET_PROG, RECHNER, FID_MOB, CARRIER], row=[T-K828700909, 42421363, 2018-08-31 19:28:46.0, A5E36675927, 34, MD1KS4WC, E0040100269A0909, 4711100]}]}
2019-03-14 12:39:34.078 +0100 [QuartzScheduler_Worker-15] DEBUG c.s.d.s.dbinput.task.processors.EventMarshaller - action=finish_format_hec_events record=Record: {header=[RisingInputRecordHeader{risingColumnValue='2018-08-31 19:26:10.0'} number=998, source="Alvin_Log", creationDate="2018-08-31 19:26:10.0"], payload=[{"time":"1535736370,000","event":"2018-08-31 19:26:10.000, FID=\"T-K828700890\", AUFTRAG_NR=\"42421363\", DATUM=\"2018-08-31 19:26:10.0\", FBG_NR=\"A5E36675927\", LOET_PROG=\"34\", RECHNER=\"MD1KS4WC\", FID_MOB=\"E00401009EC8BAC8\", CARRIER=\"4711100\"","source":"Alvin_Log","sourcetype":"dwh","index":"dwh","host":"alvin_log"}]}
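One thing I notice comparing the traces: my failing events carry "time":"1535736370,000" with a decimal comma, while the working example above had "time":"1552421287.000" with a period. HEC parses time as an epoch number, so a comma there would make the event invalid. My assumption is that this is a JVM locale issue (German decimal separator) introduced by the server move; a quick sanity check:

```python
# "time" value from my failing trace vs. the working example above
failing = "1535736370,000"   # decimal comma (locale-formatted?)
working = "1552421287.000"   # decimal point

def is_valid_hec_time(value):
    """HEC expects epoch seconds as a plain number; try to parse it."""
    try:
        float(value)
        return True
    except ValueError:
        return False

print(is_valid_hec_time(working))  # True
print(is_valid_hec_time(failing))  # False
```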
