I am currently experiencing an issue in our production environment and wanted to check whether any of you have encountered something similar. Data had been flowing into an index via Splunk DB Connect 1, but two days ago it suddenly stopped indexing. The following is our DB Connect configuration:
We are dumping data from an Oracle database into a Splunk index.
Splunk version is 6.2.2
DB Connect version is 1.1.7
DB input strategy is DB Dump
Output format is Multi-line key-value format.
When I run a query in the DB Connect query browser, it returns results successfully.
There are NO errors in the splunkd logs, and dbx_debug shows that this DB input triggered and completed gracefully.
The Splunk DB Connect app fetches data from the database and writes it to temporary files under $SPLUNK_HOME/var/spool/dbmon. From there, a batch input using the sinkhole policy indexes the data; once a file has been indexed successfully, the temporary file is deleted from $SPLUNK_HOME/var/spool/dbmon.
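For reference, this is the general shape of a sinkhole batch input in inputs.conf. DB Connect 1 ships its own equivalent stanza for the dbmon spool, so the path and attributes below are illustrative, not the app's exact configuration:

```
# Illustrative inputs.conf stanza -- NOT the exact one DB Connect ships.
# move_policy = sinkhole tells Splunk to delete each file after indexing it.
[batch:///opt/splunk/var/spool/dbmon]
move_policy = sinkhole
index = my_db_index        # hypothetical target index name
```

If this input is disabled, misconfigured, or blocked (e.g. by file permissions), files will pile up in the spool directory exactly as described below.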
I still have data sitting in $SPLUNK_HOME/var/spool/dbmon, but nothing is reaching the index.
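One quick check is whether files are actually accumulating in the spool directory: files that linger there for more than a few minutes suggest the batch input has stopped consuming them, rather than the database side failing. A minimal sketch, assuming a default install path of /opt/splunk (adjust SPLUNK_HOME for your environment):

```shell
#!/bin/sh
# Inspect the DB Connect spool directory for stuck files.
# /opt/splunk is an assumed default; override by exporting SPLUNK_HOME.
SPOOL="${SPLUNK_HOME:-/opt/splunk}/var/spool/dbmon"
if [ -d "$SPOOL" ]; then
    # List what is currently waiting to be indexed
    ls -l "$SPOOL"
    # Files older than 10 minutes should already have been indexed and deleted;
    # anything this command prints is likely stuck.
    find "$SPOOL" -type f -mmin +10
else
    echo "spool directory not found: $SPOOL"
fi
```

Also worth checking that the splunk user still has read/delete permissions on that directory, since the sinkhole policy must remove each file after indexing it.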
I also have some other indexes that receive data from the same Oracle DB through the same DB Connect 1.1.7 installation; those continue to work fine, while only a few indexes have stopped.
General principles: if it worked and then stopped, something has changed. What changed? Assuming that all the applicable logs are in Splunk, set the search window to the time when it stopped working and look for errors... or even just run
(* OR index=_internal) | dedup punct
to see a rough idea of what types of things are in there.
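If that generic survey is too noisy, a narrower variant that keeps only warnings and errors from splunkd can point at the failure directly. `log_level` and `component` are standard fields on `_internal` splunkd events, and BatchReader is the component that consumes sinkhole batch inputs, so its messages are worth reading closely here:

```
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count by component
| sort - count
```

From there, drill into the loudest components, or search the `_internal` index for the literal string `dbmon` to find any messages that mention the spool path itself.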