In my environment, I have configured an input in DB Connect against a SQL Server database. After checking the logs, I found the following error being output continuously:
[QuartzScheduler_Worker-14] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader
com.microsoft.sqlserver.jdbc.SQLServerException: The query has timed out
at com.microsoft.sqlserver.jdbc.TDSCommand.checkForInterrupt(IOBuffer.java:6498)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:67)
at com.microsoft.sqlserver.jdbc.SQLServerResultSet.<init>(SQLServerResultSet.java:310)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1646)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:426)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:372)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:6276)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1794)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:184)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:159)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:284)
at com.splunk.dbx.connector.connector.impl.JdbcConnectorImpl.executeQuery(JdbcConnectorImpl.java:291)
at com.splunk.dbx.connector.connector.impl.JdbcConnectorImpl.executeQuery(JdbcConnectorImpl.java:331)
at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:80)
at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:117)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:74)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
I am not sure whether this is the complete log, but the input is still ingesting data on every run. Is this error caused by the query timeout setting discussed in the answer below?
https://answers.splunk.com/answers/506917/dbconnect-2-input-timed-out-can-i-increase-the-tim.html
Also, will some records still be captured even if a timeout occurs? Any advice would be greatly appreciated.

Hello Yutaka,
A timed-out query usually means that you failed to connect to the service port because it is being blocked by a firewall. Please try running the following from your DB Connect host's CLI:
telnet yourdbserver portnumber
If this fails, something is blocking the connection, and you should check with your firewall team to find out where the traffic is being denied.
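For example, assuming SQL Server is listening on its default port 1433 (the hostname below is just a placeholder), the check would look like:
telnet sqlserver.example.com 1433
If the port is open you should get a blank connected session rather than a connection timeout or refusal.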
Regards,
David

Thank you for your answer!
Since data is actually being captured, I find it hard to believe that the connection is blocked.
If the connection is not blocked, i.e. the port is open, would the following setting in db_inputs.conf be relevant after all?
query_timeout =
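For reference, a rough sketch of how that setting might look in a db_inputs.conf input stanza (the stanza name and value here are hypothetical, assuming the value is in seconds):
[my_sql_server_input]
query_timeout = 600
# hypothetical: allow the query up to 10 minutes before it is aborted as timed out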

Can you please post an extract of your db_inputs config file?
If you're using an incremental db input config, then you are not going to lose data: when the input next runs successfully, it will take everything after the checkpoint and store the new checkpoint value for the next import.
Changing the query_timeout can help. To double-check whether it will change anything, test a very large dbxquery and see if it works. If it does, then the timeout is not your problem. It could be that your DB restricts the maximum number of queries it can receive per day from your account, or limits the maximum batch size that can be exported 🙂
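A hedged sketch of such a test from the Splunk search bar (the connection name and table are placeholders):
| dbxquery connection="my_mssql_connection" query="SELECT TOP 1000000 * FROM my_large_table"
If a query of roughly the same size as your input completes here without timing out, query_timeout is probably not the bottleneck.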

Thank you for your answer.
I'm sorry, but it is difficult for me to share the configuration values.
Yes, I'm using a rising column input, so I never lose data.
However, if a timeout occurs, my alerts may miss data that should have been detected, so I think I can avoid this by changing the value of query_timeout.
I also checked the timeout value on the DB side, but it defaults to 10 minutes, so the problem is the timeout value on the Splunk DB Connect side.
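In case it helps anyone verifying the DB-side value: if the 10-minute default you saw is SQL Server's "remote query timeout" option, it can be checked with something like the following (this shows the configured value; whether it governs this particular input is an assumption):
EXEC sp_configure 'remote query timeout';
-- default run_value is 600 seconds, i.e. 10 minutes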
