I am using DBConnect to fetch two timestamps from an Oracle database table, let's call them TS1 and TS2, having the following values (TS1 is the time used for indexing the data, whereas TS2 is the Rising Column):
02-JUN-2015 06:05:16 02-JUN-2015 **10:21:14**
On my local Development system (a Windows system with Splunk version 6.2), the above data is indexed as it is, i.e.
TS1 = 02-JUN-2015 06:05:16, TS2 = 02-JUN-2015 10:21:14
However, on an Integration Testing Server (a Linux server with Splunk version 6.1.5), the same data is indexed as:
TS1 = 02-JUN-2015 06:05:16, TS2 = 02-JUN-2015 **11:21:14**
As we can see, only TS2 is off by 1 hour on the Linux server.
I am not sure why Splunk is indexing the data differently on the two servers. I have not made any Time Zone-related props.conf changes on either of them.
The Windows server is running on UTC with Daylight Saving Time enabled, and the Linux server is in BST.
Could someone please let me know what could be the reason behind this behaviour on the Linux server (Splunk version 6.1.5)?
Further to this, we created a simple Java program that connects to the database and prints a single record, and executed it from both our local Development system and the Integration Testing server, calling the `resultSet.getTimestamp` method on the result. We noticed that both systems return the same value for each of the datetime attributes, e.g. "2015-06-18 09:48:35.0".
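One caveat worth noting: the fact that both JVMs print the same string does not by itself prove the underlying epoch values match, because `java.sql.Timestamp` both parses and renders in the JVM's default time zone. The same wall-clock string round-trips identically under any zone while mapping to different epochs. A minimal sketch (class name and values are illustrative, not from the original test program):

```java
import java.sql.Timestamp;
import java.util.TimeZone;

public class TsDemo {
    // Parse the same wall-clock string under a given default time zone
    // and return the resulting epoch value in milliseconds.
    static long epochFor(String wallClock, String zoneId) {
        TimeZone.setDefault(TimeZone.getTimeZone(zoneId));
        return Timestamp.valueOf(wallClock).getTime();
    }

    public static void main(String[] args) {
        String wallClock = "2015-06-18 09:48:35.0";
        long utc = epochFor(wallClock, "UTC");
        long london = epochFor(wallClock, "Europe/London"); // BST in June
        System.out.println("UTC epoch:    " + utc);
        System.out.println("London epoch: " + london);
        System.out.println("difference:   " + (utc - london) + " ms"); // 3600000
    }
}
```

So identical printed strings on both hosts are consistent with the epochs differing by exactly one hour, which matches the DBConnect symptom described above.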
The issue only appears when we execute the query from DBConnect, which seems to convert the results into epoch format, and that conversion is where the extra hour seems to be added. Even within DBConnect, the results are fine when we apply the to_char function to the datetime attribute. For example, select update_time, to_char(update_time,'dd-mon-yyyy hh24:mi:ss') from tablename returns the string representation of the attribute perfectly fine, but adds one hour to the first column (the epoch-converted one), and this addition only happens on the Integration server.
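To illustrate the suspected mechanism: once a timestamp has been converted to an epoch value under one zone, rendering that epoch under Europe/London (which observes BST in June) shows exactly the one-hour shift described above. A hedged sketch using the TS2 value from the question (the epoch constant corresponds to 02-JUN-2015 10:21:14 UTC; the class name is illustrative):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class EpochShiftDemo {
    // Format one epoch value under two zones, mimicking how an epoch
    // captured under UTC can render an hour later on a BST host.
    static String render(long epochMillis, String zoneId) {
        SimpleDateFormat fmt =
            new SimpleDateFormat("dd-MMM-yyyy HH:mm:ss", Locale.ENGLISH);
        fmt.setTimeZone(TimeZone.getTimeZone(zoneId));
        return fmt.format(new Date(epochMillis));
    }

    public static void main(String[] args) {
        long epoch = 1433240474000L; // 02-JUN-2015 10:21:14 UTC
        System.out.println("UTC:    " + render(epoch, "UTC"));           // 02-Jun-2015 10:21:14
        System.out.println("London: " + render(epoch, "Europe/London")); // 02-Jun-2015 11:21:14
    }
}
```

The to_char workaround sidesteps this entirely because the database hands back a plain string that never passes through an epoch conversion.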
Forgive me for not noticing this earlier (you said it clearly enough), but your problem is not with TS1 (which Splunk is using for timestamping and setting the _time value for each event) but with TS2, which is a non-timestamp time inside the data. If that is really the case, then what you are describing makes no sense at all.
First of all, Splunk will never (without some serious configuration work on your part) re-write data inside an event; it is always preserved, indexed, and passed back to searches as it was when it was received/forwarded.
Secondly, all of Splunk's TZ settings apply ONLY to the timestamp (i.e. to the _time value), never to other times inside the raw data.
The only way that what you are saying makes sense is if the generator of the data (the combination of DB Connect and your DB) is behaving differently when you connect from Dev vs. when you connect from Integration. I will admit that I am light on DB Connect experience, but I do know that v2 is very different from previous versions, so have you upgraded to v2 on one of the servers?
Another thing to check is to ensure that your SQL user, your SQL string, and your DB connection are identical between the 2 servers. I have a strong suspicion that this is not the case and that whatever difference you find will be the root cause of the discrepancy.
That's my point. When the default is set, Splunk will use the system time/timezone. BST is one hour ahead of UTC, so that's why your data shows up as off. In your profile, set the TZ to GMT or whatever timezone you want; Splunk will automatically apply search-time offsets.
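The index-time counterpart of the profile setting is a TZ stanza in props.conf, which pins how timestamps in that data are interpreted regardless of the host's system zone. A sketch, assuming the data arrives under a dedicated sourcetype (the stanza name below is a placeholder, not from the question):

```
# props.conf -- stanza name is illustrative; use your DB Connect sourcetype
[my_dbconnect_sourcetype]
TZ = UTC
```

With both servers forced to the same TZ, any remaining one-hour difference would have to come from the data generator rather than from Splunk's timestamp handling.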
Splunk did a somewhat nasty thing in Splunk v6 (dot-something; I don't know which minor version for sure): they changed the precedence rules for props.conf, and this may be the problem. In v5 and earlier (and maybe some versions of v6), the only way to set TZ was to have a configuration on the indexer (or, for a Heavy Forwarder, on the forwarder). In v6(?) this was changed: Splunk will honor a TZ setting from any forwarder (Universal, Light, Heavy), propagate it internally to the indexers, and OVERRIDE any setting that was deployed to the indexers. This is a HUGE change and could cause TOTALLY different TZ behavior just by upgrading the version of Splunk you are running and changing NOTHING else.
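Concretely, under the newer precedence rules a forwarder-side stanza like the following would win over whatever was deployed to the indexers (the stanza name is a placeholder for illustration):

```
# props.conf on the forwarder -- under the changed precedence rules,
# this TZ travels with the data and overrides the indexer's setting
[my_dbconnect_sourcetype]
TZ = Europe/London
```

Auditing both hosts with `btool` (e.g. `splunk btool props list --debug`) is a quick way to see which copy of the setting actually wins.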
On both of these servers, we have a single Splunk instance that does the data capture, indexing, and searching. Since we do not have separate forwarder instances, I feel the TZ configuration would not have been overridden.