Due to some database schema changes we're having to increase the timeouts on many of our MySQL and Redshift queries. As a result, it appears we ran out of connections in our connection pool (which I believe defaults to 8) and were thus unable to make new connections for subsequent runs.
Here is the error I see in the log:
2018-06-05 21:15:25.131 +0000 [dw-53 - GET /api/connections/awesome_db] ERROR com.splunk.dbx.connector.ConnectorFactory - action=failed_to_load_get_connection error=unnamed_pool_939597389_jdbc_mysql_//awesome_db - Connection is not available, request timed out after 30000ms. cause={}
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at sun.reflect.GeneratedConstructorAccessor182.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
    at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:989)
    at com.mysql.jdbc.MysqlIO.
My question is: what impact would increasing the number of available connections in the pool have on our Splunk installation? And has anyone had success doing so? Thanks!
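For context, the "Connection is not available, request timed out after 30000ms" message looks like it comes from HikariCP, the pooling library sitting under the JDBC connection. The sketch below is plain HikariCP in Java, not DBX's actual configuration; the JDBC URL, credentials, and numbers are made up, and it assumes HikariCP and the MySQL driver are on the classpath. It just shows the two knobs that interact in the error above: the maximum pool size (how many connections can be checked out at once) and the connection timeout (how long a caller waits for a free one before failing with that exact message).

import java.sql.Connection;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSizingSketch {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();

        // Hypothetical URL/credentials for illustration only
        config.setJdbcUrl("jdbc:mysql://awesome_db:3306/reporting");
        config.setUsername("splunk_readonly");
        config.setPassword("********");

        // maximumPoolSize: how many connections can be checked out at once
        // (HikariCP's own default is 10; the pool behind DBX appears to be smaller)
        config.setMaximumPoolSize(16);

        // connectionTimeout: how long a caller waits for a free connection before
        // failing with "Connection is not available, request timed out after 30000ms"
        config.setConnectionTimeout(30_000);

        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            // Every long-running query holds its connection for the query's full duration,
            // so longer query timeouts + the same pool size = more callers waiting out the
            // 30s limit. Raising maximumPoolSize trades that for more open connections
            // (memory on this host, plus the max_connections limit on the database side).
            System.out.println("Checked out: " + conn.getMetaData().getURL());
        }
    }
}

In other words, once every connection in the pool is held by a slow query, each additional request just waits out the 30-second timeout and fails, which matches what we're seeing.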
Hi,
Did you ever find a fix for this problem?
Nope. Sometimes it just happens, and after a while it clears up 😕
Are you sure the HWF running DBX isn't overloaded on CPU or RAM? I would start there.
Yes, the DBX app is the only thing this host runs. There is no user activity, indexing, or searching on this box. We have < 10% CPU usage (average) and < 5 GB of RAM used (average, out of a total of 15 GB available).