I need to pull 30 million rows from a database into Splunk using the DB Connect app.
I have set FETCH_SIZE to 20,000.
But the data pull (indexing) stops after roughly 20 million rows.
I see this error in the splunk_app_db_connect_server.log file:
2019-04-10 23:18:32.350 -0700 [QuartzScheduler_Worker-16] ERROR org.easybatch.core.job.BatchJob - Unable to write records
java.io.IOException: HTTP Error 503: Service Unavailable
        at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:112)
        at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:89)
        at com.splunk.dbx.server.dbinput.recordwriter.HecEventWriter.writeRecords(HecEventWriter.java:36)
        at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203)
        at org.easybatch.core.job.BatchJob.call(BatchJob.java:79)
        at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.
How can I resolve this?
That error suggests a problem with HEC (HTTP Event Collector); please check whether the HEC port is reachable. I'd also suggest reducing the fetch size and increasing max rows.
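To verify that HEC is enabled and listening, one place to check is the [http] stanza in inputs.conf (typically under $SPLUNK_HOME/etc/apps/splunk_httpinput/local/). The values below are illustrative defaults, not a recommendation for your environment:

```ini
[http]
disabled = 0        # HEC globally enabled
port = 8088         # default HEC port; must match the port DB Connect targets
# dedicatedIoThreads = 2   # sometimes raised when HEC is saturated; tune with care
```

If the stanza is missing or disabled = 1, DB Connect's uploads will fail regardless of fetch size.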
@harsmarvania57
I get about 80% of the records, but it fails towards the end. If the HEC port weren't available at all, I wouldn't get any records, right?
Splunk DB Connect 3 uses HEC to ingest data into Splunk. A 503 error indicates that HEC is overloaded, which is why those events were dropped.
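To illustrate why an overloaded HEC loses events: a writer that treats 503 as fatal drops the whole batch, while one that retries with exponential backoff usually rides out a temporary overload. This is only a sketch of the pattern, not DB Connect's actual code; `send_batch` is a hypothetical uploader (e.g. an HEC POST):

```python
import time

def upload_with_retry(send_batch, events, max_retries=5, base_delay=1.0):
    """Send a batch of events; on a 503-style 'overloaded' IOError,
    back off exponentially instead of dropping the batch."""
    for attempt in range(max_retries + 1):
        try:
            return send_batch(events)  # hypothetical uploader call
        except IOError:
            if attempt == max_retries:
                raise  # give up only after the final retry
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

The same idea applies at the configuration level: smaller batches give HEC room to keep up, so fewer uploads hit the overload condition in the first place.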
@harsmarvania57
So what do you suggest to handle this?
Do we need to upgrade the infrastructure, or change the values of max rows and fetch size?
I'd suggest using a smaller fetch size and a larger max rows value (but not too large) and seeing how it performs.
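As a concrete example of that trade-off, a DB Connect v3 input stanza (db_inputs.conf) might be tuned along these lines. The input name and all values here are purely illustrative assumptions, not known-good settings for this deployment:

```ini
[my_db_input]        # hypothetical input name
mode = batch
fetch_size = 5000    # fewer rows per JDBC fetch -> smaller HEC uploads
max_rows = 100000    # upper bound of rows per execution; keep moderate
interval = 300       # poll every 5 minutes
```

Lowering fetch_size reduces the size of each upload hitting HEC, at the cost of more round trips; max_rows caps how much one execution tries to push through.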
I tried different combinations of fetch size and max rows, but it doesn't work.
What is the polling frequency for the database input? Is it batch mode or rising column mode? What are the specifications of the server on which DB Connect is running?