Getting Data In

DB Connect 1: Why is the timezone not appearing correctly?

edwardrose
Contributor

Hello All

I have an issue with the TZ not appearing correctly. I have two different inputs coming in and both have the following:

[root@splk-srch-01 local]# more props.conf 
[event_data]
TZ = GMT

[user_data]
TZ = GMT

Searches over the last hour compared to All Time look correct for sourcetype event_data, but the same searches against user_data show timestamps that seem wrong.

(screenshot: Last 24 hours search)

(screenshot: All Time search)
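
For anyone following along without the screenshots, a comparison along these lines (a sketch; the index name lenel comes from the search results later in the thread) shows the same discrepancy as plain counts:

index=lenel earliest=-24h | stats count by sourcetype
index=lenel earliest=0 latest=+10y | stats count by sourcetype

The first search covers the last 24 hours; the second stretches the window from the epoch far into the future, so even mis-timestamped events are counted. If the two counts diverge wildly for user_data but not for event_data, events are landing outside the time window you expect.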


woodcock
Esteemed Legend

I assume that all the "account" sources are sourcetype=user_data and all the "event" sources are sourcetype=event_data. This is a mess. To answer just your first question: the search results show that indexers splk-idx-03.wv.mentorg.com and splk-idx-01.wv.mentorg.com are not using the GMT setting, so either the file did not go out to them or they have not been restarted. I know this because I see local instead of 0 for date_zone, so that is the first thing to fix.

However, the large lagSecs values indicate a major timestamping problem: those events are being indexed as if they "happened in the future", which is impossible. This second problem is what is really causing you to misinterpret the first one. Because your misconfigurations (some of which still exist) have thrust so many events "into the future", when you search for "last 15 minutes" you are seeing not only events that you recently indexed, but also events from long ago that were mis-timestamped (with the wrong TZ) and are only now coming into focus as "now". You will have to adjust how you analyze the impact of your configuration changes until all of the "future" data ages out.

Run this search again, but for "All Time" (I forgot to mention that part), and I can give you a better assessment.
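
As a concrete way to see the future-stamped events (a sketch, reusing the index and field names from the search later in this thread), something along these lines isolates every event whose timestamp is ahead of its index time; run it over All Time so the future-stamped events fall inside the window:

index=lenel | eval lagSecs=_time-_indextime | where lagSecs > 60 | convert ctime(_indextime) as indextime | table _time indextime lagSecs date_zone splunk_server source

The where clause keeps only events timestamped more than a minute ahead of when they were indexed; once it stops returning new results, the "future" backlog has aged out.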


edwardrose
Contributor

The output that I put up yesterday was for all time. Are you suggesting that I change the TZ on all the indexers to use GMT?

-ed


woodcock
Esteemed Legend

Did you fix the TZ=GMT in props.conf on the indexers and restart them? That is the first step, but you have many more problems than that.


edwardrose
Contributor

I set TZ=GMT in the DBX app on the search head, not on the indexers. So I should set TZ=GMT in /opt/splunk/etc/system/local/props.conf on the indexers as well? None of them currently has a TZ set in that location.


woodcock
Esteemed Legend

Yes, definitely. The search results are definitive: you must make this change but it won't fix all of your problems. It will fix all of the problems that you noticed.
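
One way to confirm the change actually landed on each indexer (a sketch, assuming the default /opt/splunk install path used above) is btool, followed by a restart:

# show which props.conf file wins for each sourcetype and where that file lives
/opt/splunk/bin/splunk btool props list event_data --debug
/opt/splunk/bin/splunk btool props list user_data --debug
# a restart is required before the TZ change takes effect
/opt/splunk/bin/splunk restart

If the --debug output does not show TZ = GMT coming from the file you edited, the change never reached that indexer.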


edwardrose
Contributor

Sorry about the long delay but I can report back that I made the change to the TZ on the indexers and it still did not resolve the issue.

Here are the results of the search run over All Time:

_time   indextime   lagSecs date_zone   splunk_server   index   host    source
2015-07-22 15:10:34 07/22/2015 08:10:44 25190.000   0   splk-idx-02.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-07-22 15:09:08 07/22/2015 08:09:14 25194.000   0   splk-idx-01.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-07-22 14:56:31 07/22/2015 07:56:39 25192.000   0   splk-idx-03.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-07-22 08:12:43 07/22/2015 08:12:50 -7  0   splk-idx-01.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.EventsView
2015-07-22 08:12:16 07/22/2015 08:12:18 -2  0   splk-idx-02.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.EventsView
2015-07-22 08:11:09 07/22/2015 08:11:15 -6  0   splk-idx-03.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.EventsView

It seems better: date_zone now shows 0 everywhere, but the lag is still negative for sourcetype=event_data, and the AccountTransactionsView rows are still about seven hours ahead of their index time, which tells me the DB side is probably causing the time issue we are seeing.
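
One way to test the DB theory (a sketch) is to bucket the remaining lag into hours per source; clock drift would show up as a small, slowly changing offset on every source, while a TZ problem shows up as a clean whole-hour offset on specific sources:

index=lenel | eval lagHrs=round((_time-_indextime)/3600,1) | stats count by source lagHrs

A constant +7.0 on the AccountTransactionsView rows would match a GMT-versus-US-Pacific (UTC-7) mismatch exactly.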


woodcock
Esteemed Legend

Now that date_zone is 0, you know that the TZ portion of your configuration is in effect (Splunk is treating the times as being in GMT=UTC). Since the AccountTransactionsView rows are still about seven hours off, EITHER the clock on your indexers is wrong (it is putting a bad value into _indextime) OR the thing generating the timestamps in your DB is wrong. Don't forget to "Accept" an answer to close this question.
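
To rule out the clock possibility (a sketch; svr-sql-lnl-11 presumably runs Microsoft SQL Server given the dbo. views, so the second snippet is T-SQL), compare UTC clocks on every host involved:

# run on each indexer and on the DB host; the outputs should agree to within a second or two
date -u

-- on the SQL Server side, compare local time against UTC
SELECT GETDATE() AS local_time, GETUTCDATE() AS utc_time;

If the machine clocks all agree, compare the timestamp column that the dbmon-tail inputs read against GETUTCDATE(): a constant seven-hour difference in either direction means the timezone applied at parse time does not match what the database actually writes.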


woodcock
Esteemed Legend

What is the output of this search?

index=lenel | dedup date_zone splunk_server index host source | eval lagSecs=_time-_indextime | convert ctime(_indextime) as indextime | table _time indextime lagSecs date_zone splunk_server index host source

(lagSecs here is _time minus _indextime, so events timestamped ahead of their index time show up as large positive values.)

edwardrose
Contributor

Here are the results from the last 24 hours:

_time   indextime   lagSecs date_zone   splunk_server   index   host    source
2015-07-09 13:31:51 07/09/2015 13:31:53 -2 0 splk-idx-01.wv.mentorg.com lenel svr-sql-lnl-11 dbmon-tail://Lenel_OnGuard/dbo.EventsView
2015-07-09 13:31:26 07/09/2015 06:31:28 25198.000 0 splk-idx-02.wv.mentorg.com lenel svr-sql-lnl-11 dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-07-09 13:31:19 07/09/2015 13:31:21 -2 0 splk-idx-03.wv.mentorg.com lenel svr-sql-lnl-11 dbmon-tail://Lenel_OnGuard/dbo.EventsView
2015-07-09 13:30:49 07/09/2015 06:30:58 25191.000 0 splk-idx-01.wv.mentorg.com lenel svr-sql-lnl-11 dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-07-09 13:28:22 07/09/2015 13:28:23 -1 0 splk-idx-02.wv.mentorg.com lenel svr-sql-lnl-11 dbmon-tail://Lenel_OnGuard/dbo.EventsView
2015-07-09 13:27:14 07/09/2015 06:27:17 25197.000 0 splk-idx-03.wv.mentorg.com lenel svr-sql-lnl-11 dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView


edwardrose
Contributor
_time   indextime   lagSecs date_zone   splunk_server   index   host    source
2015-07-08 17:29:44 07/08/2015 10:29:50 25194.000   0   splk-idx-01.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-07-08 17:29:04 07/08/2015 10:29:10 25194.000   0   splk-idx-02.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-07-08 17:28:18 07/08/2015 10:28:20 25198.000   0   splk-idx-03.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-07-08 10:38:05 07/08/2015 10:38:11 -6  0   splk-idx-03.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.EventsView
2015-07-08 10:37:48 07/08/2015 10:37:51 -3  0   splk-idx-02.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.EventsView
2015-07-08 10:36:30 07/08/2015 10:36:40 -10 0   splk-idx-01.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.EventsView
2015-06-17 14:00:18 06/18/2015 10:14:30 -72852  0   splk-idx-01.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.eventsview
2015-06-17 13:28:54 06/18/2015 10:12:57 -74643  0   splk-idx-03.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.eventsview
2015-06-17 13:13:59 06/18/2015 10:14:57 -75658  0   splk-idx-02.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.eventsview
2015-06-17 00:00:00 06/17/2015 10:30:37 -37837  local   splk-idx-03.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView
2015-06-17 00:00:00 06/17/2015 10:49:26 -38966  local   splk-idx-01.wv.mentorg.com  lenel   svr-sql-lnl-11  dbmon-tail://Lenel_OnGuard/dbo.AccountTransactionsView