We have two search heads configured with search head pooling, and we are seeing duplicate events generated from both of them.
As you can see below, the same database input runs on both search heads, about one minute apart. The input is scheduled to run every 4 hours, and as a result we see a lot of duplicated events (a quick search to confirm this is sketched after the log excerpts).
dbx.log on SH 1:
2014-08-01 14:20:19.558 monsch1:INFO:Scheduler - Execution of input=[dbmon-tail://mssql_db1/OrderDetails_audit] finished in duration=31302 ms with resultCount=12418 success=true continueMonitoring=true
dbx.log on SH 2:
2014-08-01 14:21:41.479 monsch1:INFO:Scheduler - Execution of input=[dbmon-tail://mssql_db1/OrderDetails_audit] finished in duration=10360 ms with resultCount=12422 success=true continueMonitoring=true
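As a sanity check, a search along these lines confirms that identical raw events were indexed more than once; this is only a sketch, and the index and sourcetype names are placeholders for wherever this input actually writes:

    index=your_db_index sourcetype=your_sourcetype
    | stats count by _raw
    | where count > 1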
This is expected behavior when the DB Connect app runs in a search head pooling environment with dbmon-tail inputs. Each Splunk search head keeps its own persistent storage to track the last rising column value, and that value is most likely different on each search head, which causes duplicate events to be indexed.
For dbmon-tail inputs, you should stand up a dedicated heavy forwarder that runs the DB Connect app and forwards the data to the indexers.
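For example, a minimal sketch of what the inputs.conf stanza might look like on the dedicated heavy forwarder, reusing the stanza name from the logs above; the rising column, interval, index, and sourcetype values are illustrative assumptions and should be checked against your DB Connect version's documentation:

    # inputs.conf on the dedicated heavy forwarder (sketch only)
    [dbmon-tail://mssql_db1/OrderDetails_audit]
    table = OrderDetails_audit
    tail.rising.column = audit_id   # placeholder for the actual rising column
    interval = 4h                   # keep the current 4-hour schedule
    output.format = kv
    index = your_db_index           # placeholder
    sourcetype = your_sourcetype    # placeholder
    # Disable or remove the corresponding stanza on the pooled search heads
    # so only the heavy forwarder executes this input.

Once a single Splunk instance owns the input, only one copy of the rising-column state exists, so each execution picks up where the previous one left off and the duplicates stop.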
Thanks for the info!