Splunk DB Connect v1 tail input stopped sending data to indexers



We have search head clustering and indexer clustering (not multisite) enabled in our environment.
We have Splunk DB Connect v1 installed on our deployment server, and the deployment server's /apps/splunk/etc/system/local/outputs.conf contains:

server=indexer1:9997, indexer2:9997, indexer3:9997, indexer4:9997, indexer5:9997, indexer6:9997
autoLB = true
useACK = true
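For context, these settings normally live under a tcpout stanza in outputs.conf; a minimal sketch of the full file might look like this (the group name primary_indexers is a placeholder):

```conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Placeholder hosts as in the excerpt above
server = indexer1:9997, indexer2:9997, indexer3:9997, indexer4:9997, indexer5:9997, indexer6:9997
autoLB = true
useACK = true
```

Worth noting: with useACK = true the forwarder holds events until the indexers acknowledge them, so if acknowledgements stall, the output queue can fill and block, and inputs appear to keep running while no data arrives at the indexers.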

We have a DB Connect tail input set up on the deployment server, configured to send events to a specific index. It was working until yesterday, then suddenly stopped sending any data to the index.
I have verified that /apps/splunk/var/lib/splunk/persistentstorage/dbx/6ef870be8f52c4ff7a8c4d303e193ce4/state.xml is being updated regularly with new rising-column values; however, the events are not appearing on the indexers.

Something I found in /Splunk/splunk/var/log/splunk/metrics.log on the indexers:

05-03-2017 08:09:46.654 -0400 INFO  Metrics - group=per_source_thruput, series="dbmon-tail://xxxxx/xxxxxx", kbps=60.373011, eps=23.032196, kb=1871.568359, ev=714, avg_age=0.577031, *max_age=2* **LAST EVENT REACHED INDEXER**
05-04-2017 07:44:59.204 -0400 INFO  Metrics - group=per_source_thruput, series="dbmon-tail://xxxxx/xxxxxx", kbps=41.061727, eps=12.677511, kb=1272.904297, ev=393, avg_age=166736675.603053, *max_age=170644749*

The max_age is a very large value after the last event reached the indexer.

Can anyone please help?
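For reference, that per-source throughput can be tracked over time with a search like this on the indexers (the series value below is a wildcard placeholder for the real dbmon-tail series):

```spl
index=_internal source=*metrics.log* group=per_source_thruput series="dbmon-tail://*"
| timechart span=10m sum(kb) AS kb max(max_age) AS max_age
```

A sudden drop in kb together with a jump in max_age marks the point where indexing of this source stopped.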

Simon Mandy


Ultra Champion

Did this turn out to be a bug or something? Any solution discovered?


Ultra Champion

Make sure to include details on any recent changes to the environment.
Validate whether the DS is sending ANY data to the indexers.

Also, make sure the DS is not over-committed, since deployment-server activity consumes most of its resources. For that reason, you might consider a data-collection tier: a set of forwarders (heavy or universal, depending on needs) dedicated to this type of input.
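A quick way to check whether the DS is forwarding anything at all is to look for its own internal logs on the indexers; if a search like this returns nothing recent, the output path is down entirely (replace the host value with your DS hostname):

```spl
index=_internal host=<your_ds_hostname>
| stats latest(_time) AS last_internal_event
```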



Have you looked at dbx.log and splunkd.log on the dbx server?

If this reply helps you, an upvote would be appreciated.


Yes, I have, and I couldn't find anything wrong there:

2017-05-04 09:11:29.474 dbx123:INFO:TailDatabaseMonitor - Executing database monitor=[dbmon-tail://xxxxx/xxxxx1_new_new]
2017-05-04 09:11:24.472 dbx9400:INFO:TailDatabaseMonitor - Database monitor=[dbmon-tail://xxxxx/xxxxx1_new_new] finished with status=true resultCount=294 in duration=1473 ms
2017-05-04 09:11:22.999 dbx9400:INFO:TailDatabaseMonitor - Executing database monitor=[dbmon-tail://xxxxx/xxxxx1_new_new]
2017-05-04 09:11:17.997 dbx9861:INFO:TailDatabaseMonitor - Database monitor=[dbmon-tail://xxxxx/xxxxx1_new_new] finished with status=true resultCount=380 in duration=2091 ms
2017-05-04 09:11:15.905 dbx9861:INFO:TailDatabaseMonitor - Executing database monitor=[dbmon-tail://xxxxx/xxxxx1_new_new]
2017-05-04 09:11:10.904 dbx6352:INFO:TailDatabaseMonitor - Database monitor=[dbmon-tail://xxxxx/xxxxx1_new_new] finished with status=true resultCount=327 in duration=2018 ms
2017-05-04 09:11:08.886 dbx6352:INFO:TailDatabaseMonitor - Executing database monitor=[dbmon-tail://xxxxx/xxxxx1_new_new]
2017-05-04 09:11:03.885 dbx9475:INFO:TailDatabaseMonitor - Database monitor=[dbmon-tail://xxxxx/xxxxx1_new_new] finished with status=true resultCount=293 in duration=1618 ms
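Since the monitors are completing with status=true, the input side looks healthy, which points downstream. With useACK enabled, a blocked tcpout queue on the DS would explain state.xml advancing while nothing is indexed; assuming the DS's internal logs are still searchable somewhere, a search along these lines can surface blocked queues:

```spl
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
```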