Getting Data In

How to configure timezone offset for log data coming via DBX

pjb2160
Path Finder

Hello,

I have a data source, an Oracle view, providing log data via the DBX application, which uses epoch time for its timestamps. Naturally this is GMT, and I need to apply a +11 hour offset.

I have read a number of Splunk docs and Splunk Answers but am quite confused. It appears I just need to add something as simple as the following to a props.conf file:

[oracle_datasource]
TZ=Australia/Sydney

However, I'm unclear as to which exact props.conf file. Does it need to be on the indexer, or the search head? Or, as this is via the DBX app, do I need to apply some configuration changes to the inputs.conf file in the ... /dbx/local/ folder?
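
For reference, these are the candidate locations I have in mind (paths assume a default Splunk install; the DBX app folder name is my assumption):

$SPLUNK_HOME/etc/system/local/props.conf (on the indexer)
$SPLUNK_HOME/etc/apps/search/local/props.conf (on the search head)
$SPLUNK_HOME/etc/apps/dbx/local/props.conf (alongside the DBX app)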

Many thanks,
Paul

(p.s. sorry for the forward slashes! The backslash escape doesn't seem to work here, i.e. two backslashes still don't render in the preview pane.)

1 Solution

pjb2160
Path Finder

So... it appears I have been somewhat mistaken! In my original post I said the DBX application uses epoch time for its timestamps and that it was GMT. Wrong(ish)! It uses UTC, not GMT (see here for the differences: http://www.timeanddate.com/time/gmt-utc-time.html). Also, what was actually happening is that the Oracle view had the CREATION_DATE column defined as a datetime/timestamp type, which DBX then converted to an epoch time (I had incorrectly assumed the Oracle view recorded the timestamp as an epoch value in the first place).

I made numerous attempts to resolve this, including but not limited to:

  • applying the TZ offset via the various props.conf files (this did not work; I could see no effect)
  • rewriting the SQL within DBX to create a new column with the time zone offset applied, roughly as sketched after this list (this did not work; the new column was not created)
  • deleting and recreating the database input exactly as it was before (this did not work)
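
For what it's worth, the SQL rewrite was along the following lines (a sketch only; MY_LOG_VIEW is a placeholder name, and I'm assuming CREATION_DATE is an Oracle DATE/TIMESTAMP column):

SELECT v.*,
       -- hypothetical derived column shifting CREATION_DATE to Sydney time
       FROM_TZ(CAST(v.CREATION_DATE AS TIMESTAMP), 'UTC')
         AT TIME ZONE 'Australia/Sydney' AS CREATION_DATE_LOCAL
FROM MY_LOG_VIEW v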

Interestingly, when editing the below line in the DBX inputs.conf file (mostly just to see whether it would have an effect), the value in the database input field within the DBX GUI ("Manage Database Inputs") did not change:

output.timestamp.format = yyyy-MM-dd'T'HH:mm:ss.SSSZ

In the end, mostly to deal with another problem (logs were no longer being indexed), I deleted the database input again and re-created it, but this time did not check the "Output timestamp" box and did not set any value in the "Timestamp column" or "Timestamp format" fields...
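
For reference, the recreated stanza then looked roughly like this (a sketch from memory; I'm assuming DBX simply omits the timestamp keys when the box is unchecked):

[dbmon-tail://LogSourceX/LogSourceX]
...
output.format = mkv
sourcetype = oracle_datasource
...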

Voila, all of a sudden the logs were being indexed again and the time zone offset was being applied! In addition, the "CREATION_DATE" field, which previously had not been appearing in my event data, was now included.

Interestingly, this is still not perfect (am I expecting too much?). The CREATION_DATE, when converted from an epoch time to a human-readable one, does not exactly match _time (_indextime is typically greater than _time by less than 1 second). It does not match _indextime either, and although there is one more timestamp field in the data, it is exactly the same as CREATION_DATE, so I know _time is not inadvertently picking up this other timestamp.

I can live with this but would naturally prefer perfection. Any thoughts on what is informing _time?
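
For reference, this is roughly how I have been comparing the three values (a sketch; the index name is a placeholder):

index=my_oracle_index sourcetype=oracle_datasource
| eval creation_readable=strftime(CREATION_DATE, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval index_lag=_indextime - _time
| table _time _indextime creation_readable index_lag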

Many, many thanks,
P

pjb2160
Path Finder

Oh, and in relation to the TZ offset, I forgot to mention that I think it must be being picked up from my local user settings. The offset is +10 hours, though, and it should be +11 for an Australia/Sydney timezone (+11 being Sydney's daylight-saving offset). Interestingly, there is no Australia/Sydney timezone to select in my local user settings...


agoodall_splunk
Splunk Employee

Australia/Hobart should give you +11 and be in line with Sydney.


pjb2160
Path Finder

Agreed, however Australia/Hobart? That's pretty odd. No offence to anyone from Hobart, of course, but Hobart is undoubtedly the least significant of the Australian capital cities, and the absence of Sydney seems particularly strange, as Sydney is often (incorrectly) referred to as the national capital. I would hope Splunk reviews this at some point, as there can be some peculiar timezone offsets for daylight saving time along the Australian east coast.


phoenixdigital
Builder

It looks like that is the right props.conf setting.

Try it on the server that is running the DBX app first; then, if that doesn't work, add it to the indexer.

Also ensure that inputs.conf is marking the relevant data as sourcetype = oracle_datasource.
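
Putting that together, something like the following (a sketch; the path assumes a default install, and the DBX app folder name may differ):

# in $SPLUNK_HOME/etc/apps/dbx/local/props.conf on the server running DBX
[oracle_datasource]
TZ = Australia/Sydney

# and in the matching inputs.conf stanza
sourcetype = oracle_datasource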


pjb2160
Path Finder

Hello, and thank you for responding to my post; it seems I may need to provide more information. At this point, the inputs.conf file in the DBX app has the following key lines:

[dbmon-tail://LogSourceX/LogSourceX]
...
output.format = mkv
output.timestamp = 1
output.timestamp.column = CREATION_DATE
output.timestamp.format = yyyy-MM-dd'T'HH:mm:ss.SSSZ
...
sourcetype = oracle_datasource
...

I have applied the following update to the props.conf file in the search app on the same server (the search head) where the DBX app is installed:

[oracle_datasource]
TZ=Australia/Sydney

Currently, the log output for an epoch value of "1424353872.000" is rendering in Splunk as "2015-02-20T03:58:44.000+1100". This is exactly the same as it was prior to applying these changes.

I have subsequently applied the same change (TZ=...) to the props.conf on the indexer and, alas, nothing there either (still rendering "2015-02-20T03:58:44.000+1100").

I would have thought that by applying the TZ=... line in props.conf it would render as "2015-02-20T16:58:44.000". Is it possible I need to apply some TIME_FORMAT setting or something?
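
i.e. something like this, perhaps (just a guess on my part: the TIME_FORMAT strptime pattern is my translation of the DBX output.timestamp.format above, and TIME_PREFIX is my assumption about how the column appears in the raw event):

[oracle_datasource]
TZ = Australia/Sydney
TIME_PREFIX = CREATION_DATE=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z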

Any additional thoughts? (Thanks again for your response!)

Cheers,
P
