I have DBConnect V2 running on Splunk 6.3.1, and it was working fine until the new year. All records were indexing correctly before 01/01/2016 00:00:00, but since then they are being indexed against 01/01/2015.
I was alerted to this by a lack of records in the index, and when I ran the following search, I discovered them under 01/01/2015:
index=app_agg source=Oracle sourcetype=LUW_VARIABLES_HOME latest=-10mon@mon | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S") | where indextime>"2015-12-31 23:59:00"
Any help would be most appreciated.
I have also tried creating a completely new identity, connection, and dbinput, but it still puts the records into 01/01/2015.
I have tried that and still get the same problem.
I also cloned the connection and changed the epoch time to the start of the new year, but again no change.
Restart splunk servers that have dbconnect installed and let us know if that fixes it. If not, we can dig into the code.
What DBConnect command is creating index=app_agg source=Oracle sourcetype=LUW_VARIABLES_HOME latest=-10mon@mon?
There's a statement or table or something somewhere right? Something that populates this index... is it "hard coding" the year?
Sorry, that is just how I found the records. The Oracle table shows the correct date and time (checked using Toad), and even the records in Splunk have the right epoch time value (1451606479000), but for some reason Splunk has indexed them against 01/01/2015. This is happening on two separate connections to two applications on different databases, but both use epoch timestamps.
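As a sanity check on that epoch value: converting 1451606479000 ms yourself confirms the raw timestamp really falls in 2016, not 2015. A minimal Java snippet (class name is just for illustration):

```java
import java.time.Instant;

public class EpochCheck {
    public static void main(String[] args) {
        // The epoch-milliseconds value seen in the Splunk records above
        System.out.println(Instant.ofEpochMilli(1451606479000L));
        // prints 2016-01-01T00:01:19Z -- the raw value is correct; only the indexed year is wrong
    }
}
```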
Thanks for all the updates. I'll look into the code and report back if I see why it's happening. I should note I'm not the developer or anything; I just enjoy debugging.
After speaking with an Accumli consultant, we found that manually changing output_timestamp_format in inputs.conf from YYYY-MM-dd HH:mm:ss (which was generated automatically) to yyyy-MM-dd HH:mm:ss fixed the issue. These patterns follow Java's SimpleDateFormat conventions, where uppercase YYYY is the week-based year rather than the calendar year (yyyy), and the two can differ for dates near the year boundary.
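For anyone curious why one letter matters: a minimal Java sketch of the YYYY-vs-yyyy difference, using 2017-12-31 as an example date near a year boundary (US locale; the exact boundary dates affected vary by locale's week rules):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;

public class WeekYearDemo {
    public static void main(String[] args) {
        // Dec 31, 2017 is a Sunday; under US locale week rules its week
        // contains Jan 1, 2018, so it belongs to week-based year 2018.
        Calendar cal = Calendar.getInstance(Locale.US);
        cal.clear();
        cal.set(2017, Calendar.DECEMBER, 31);

        SimpleDateFormat weekYear = new SimpleDateFormat("YYYY-MM-dd", Locale.US);
        SimpleDateFormat calYear  = new SimpleDateFormat("yyyy-MM-dd", Locale.US);

        System.out.println(weekYear.format(cal.getTime())); // 2018-12-31 -- wrong year for this use
        System.out.println(calYear.format(cal.getTime()));  // 2017-12-31 -- correct calendar date
    }
}
```

Away from the year boundary the two patterns produce identical output, which is why this kind of misconfiguration can sit unnoticed until the new year.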
Something for a future release perhaps.