Hi all,
One of our external DB inputs has stopped receiving data. We use the DB Connect app; this is what /opt/splunk/var/log/splunk/dbx.log reports for the problem input:
monsch1:INFO:Scheduler - Execution of input=[dbmon-tail://database/dbtable] finished in duration=188 ms with resultCount=0 success=false continueMonitoring=false
This seems to have started after our Splunk instance was shut down for several hours.
So now we need to fill the index with the missing data. How can we do this without rebuilding the index?
The connection to the database is valid and I can view DB results using the dbquery command.
inputs.conf:
[dbmon-tail://database/dbtable]
host = xxxxxxx
index = prepayment
interval = 40 * * * *
output.format = kv
output.timestamp = 1
output.timestamp.column = create_ts
output.timestamp.format = yyyy-MM-dd HH:mm:ss.SSSZ
table = "public".dbtable
tail.rising.column = create_ts
Other inputs that continue working have similar entries in inputs.conf.
Thanks in advance
dbmon-tail mode uses the rising column (here "create_ts") to find the rows that have been written to the database since the last run. The only time you would encounter an issue is if the database purges rows and cleared them out before dbmon-tail had a chance to see them. There is a state file somewhere under $SPLUNK_HOME/var that records the last value of the rising column dbmon-tail has seen; it indicates the last timestamp (create_ts) seen by DB Connect, and the input resumes from there.
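Conceptually, each scheduled run issues something like the following against the table, using the last checkpointed value of the rising column (this SQL is illustrative only, not the exact statement DB Connect generates):

SELECT * FROM "public".dbtable
WHERE create_ts > '<last checkpointed create_ts>'
ORDER BY create_ts ASC

So as long as the missing rows are still present in the table, the input should catch up on its own once it runs again.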
And if our database has not been purging rows, then is this a bug?
I'm not sure what you mean by your question. Splunk has no control over whether or not events are purged from the database. I was simply suggesting that if events are being removed from the DB, Splunk might not have seen them while the connector was down. It will pick up from "events after the most recent create_ts", but Splunk can't know whether or not there were gaps.
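If the rows for the outage window are still in the table, one way to backfill the gap without rebuilding the index is to pull that time range with dbquery and write it back with collect. A rough sketch only: the connection name "database", the WHERE bounds, and the strptime mapping are placeholders you would need to adapt, and you should check the dbquery syntax for your DB Connect version:

| dbquery database "SELECT * FROM public.dbtable WHERE create_ts > '<gap start>' AND create_ts <= '<gap end>'"
| eval _time=strptime(create_ts, "%Y-%m-%d %H:%M:%S.%3N%z")
| collect index=prepayment

Run it over the gap only, and search the rising column first to make sure you are not indexing rows that dbmon-tail has already picked up, otherwise you will end up with duplicates.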