Hello,
I'm troubleshooting a possible problem with the DB Connect app. We set up a dbinput that indexes on a 90-second frequency, using a rising column based on time. We need to index this data fairly frequently for monitoring.
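For reference, the input is defined roughly like the stanza below (paraphrased from memory, with the connection, view, and column names replaced by placeholders, so the exact attribute names may not match what's actually in our inputs.conf):

    # placeholder connection (MYORACLE) and input name
    [dbmon-tail://MYORACLE/monitoring_view_tail]
    # placeholder view and time-based rising column
    table = MONITORING_VIEW
    tail.rising.column = EVENT_TIME
    # run every 90 seconds
    interval = 90
    # placeholder destination index and sourcetype
    index = monitoring
    sourcetype = oracle:monitoring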
At around 4 PM, the monitoring team told me that data had stopped indexing. I checked the indexing log and found no errors.
The log showed input_mode=tail events=0, repeating for about 30 minutes, until the normal indexing log entries with rising column checkpoints appeared again.
I checked with SQL and we do have data in between those times, so I want to pinpoint the root cause so we don't encounter this again.
Is this a problem with networking, with the Oracle DB, or is it coming from Splunk? (I highly doubt the latter, because I didn't change anything and it continued indexing 30 minutes later.)
Hi
If I understood right, this started to work again and indexed all the "missing" data without any intervention on the Splunk side? If so, then it's quite obvious that the issue was somewhere else, like the network, the DB, or something similar.
Could there be, for example, some maintenance job on the DB side that set an exclusive lock on those rows?
r. Ismo
But the thing is, we can index data from another DB on the same server IP just fine; only that schema and view have the problem. And I just confirmed with the DB team that there's no problem on their side.
Does Splunk have a log somewhere that can indicate or pinpoint what the problem is?
It's to prevent this in the future, and I don't want to sound petty, but I don't want people pinning this problem on Splunk and me.
When you have DBX installed on the SH side, you can use its monitoring views to get information about how it works. It also writes its own log files under …/logs/splunk.
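For example, assuming those log files sit under Splunk's own log directory (files under $SPLUNK_HOME/var/log/splunk are indexed into _internal by default), a search along these lines should show what the input itself was doing around the gap. The source and field names differ between DB Connect versions, so treat these as sketches and adjust them to the files you actually see there.

To chart how many events the tail input reported per run (the events=0 lines you quoted):

    index=_internal (source=*dbx* OR source=*splunk_app_db_connect*) "input_mode=tail"
    | rex "events=(?<events>\d+)"
    | timechart span=5m sum(events) AS events_read

And to look for warnings or errors (JDBC timeouts, connection resets, query failures) in the same window:

    index=_internal (source=*dbx* OR source=*splunk_app_db_connect*) (ERROR OR WARN)
    | table _time source _raw

If errors line up with the 30-minute gap while the other input on the same host kept running, that narrows it down to the connection, query, or DB objects that only this input uses.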