Splunk Search

How to modify Time Zone in Splunk?

SplunkDash
Motivator

Hello,

We have a few types of logs generated with different time zones. Is there any way Splunk can convert the time zones associated with the log entries to a single time zone (EST), so we can map all logs to one time zone?

DS Logs:             2021-07-28 16:57:00,526 GMT

Security Logs:     2021-07-28 16:15:49,430 EST

Audit Logs :   Wed 2021 May 28, 16:58:11:430

Any recommendations will be highly appreciated. Thank you!

yuanliu
SplunkTrust

What is the end goal?  If ingestion is working correctly, all of them will already be numeric epoch values in UTC (GMT).  The original time zone will not matter.

SplunkDash
Motivator

Hello,

Thank you so much for your quick response. The goal is for all of them to be numeric in epoch UTC (GMT).


yuanliu
SplunkTrust

Internally, the _time field should already be in epoch form.  Can you illustrate how it differs across data sources in your instance?

When _time is used as a table column (including in stats), Splunk displays it in human-readable form, potentially influenced by user preference, but doesn't change its value or internal representation.  You can still perform calculations such as _time - 3600, _time + 86400, and so on.  If simultaneous events from two sources end up with different _time values due to the sources' differing time zones, it is likely a problem in ingestion, and you'll need to adjust ingestion to take time zone into account.
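To make this concrete, here is a minimal SPL sketch (the field names are just illustrative) showing that _time is a plain epoch number you can do arithmetic on, while display formatting is a separate step:

  | makeresults count=1
  | eval one_hour_ago = _time - 3600
  | eval displayed = strftime(_time, "%Y-%m-%d %H:%M:%S %z")
  | table _time one_hour_ago displayed

The first eval works because _time is numeric; strftime() only affects how the value is shown (rendered in the searching user's configured time zone), not what is stored.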

SplunkDash
Motivator

Hello,

Thank you so much again. I tried using TZ=US/Eastern in the props.conf files; do you think that will address the issue? Thank you again!


yuanliu
SplunkTrust

It is hard to say which remedy will work without knowing the cause of the problem.  Under TZ, the Timestamp extraction configuration documentation contains this quote:

  * If the event has a timezone in its raw text (for example, UTC, -08:00),
  use that.
  * If TZ is set to a valid timezone string, use that.
  * ...

Among the log samples, only the Audit log is missing a valid timezone string.  If the Audit log is the one giving trouble (you can test with a quick comparison of _indextime and _time, sketched below), setting TZ for that source type should help.
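A rough, self-contained version of that test search (adjust the index and time range to your environment) might look like this:

  index=* earliest=-24h
  | eval timediff = _indextime - _time
  | stats avg(timediff) max(timediff) by sourcetype

A sourcetype whose average timediff sits near a whole number of hours (for example, around 18000 seconds, i.e. 5 hours) is a strong hint that its timestamps are being parsed in the wrong time zone.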

SplunkDash
Motivator

Hello,

Thank you so much again.

We have different logs with different timestamps (please see below); my objective is to configure Splunk so that all timestamps are converted to Eastern Time. Thank you again.

DS Logs:             2021-07-28 16:57:00,526 GMT

Security Logs:     2021-07-28 16:15:49,430 EST

Audit Logs :   Wed 2021 May 28, 16:58:11:430

 


yuanliu
SplunkTrust

I think achieving this objective requires two distinct actions:

  1. Make sure that all data sources are ingested with the correct internal time.
  2. Present data in Eastern Standard Time.

Let's start from the second one.  Note that I ignored the word "convert", which is very different from "present".  This is because Splunk does not store text timestamps internally; _time is an epoch number, so it carries no timezone.  You can present _time in any timezone you desire.  Usually the user can set preferences. (See Set the time zone for a user's search results.)  Alternatively, you can force presentation using functions like strftime().
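As a minimal sketch of the presentation side (the eastern field name is just illustrative, and _internal is only used as convenient sample data):

  index=_internal earliest=-5m
  | head 5
  | eval eastern = strftime(_time, "%Y-%m-%d %H:%M:%S %Z")
  | table _time eastern

Note that strftime() renders according to the time zone the searching user has configured, so setting that user preference to US/Eastern is usually the cleanest way to get Eastern Time everywhere; the eval above just makes the rendering explicit in a field.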

Now to the first.  Splunk uses various tactics to decipher timestamps in the input.  For example, it will automatically recognize "2021-07-28 16:57:00,526 GMT" as 1627491420.526000 and "2021-07-28 16:15:49,430 EST" as 1627506949.430000. (These epoch representations assume UTC, aka GMT.)  For these two, you generally don't have to worry.
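If you prefer not to rely on automatic recognition, you can also pin the format down explicitly in props.conf; a sketch along these lines (the sourcetype names are made up for illustration) should match the two samples above:

  # props.conf (hypothetical sourcetype names)
  [ds_logs]
  TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N %Z
  MAX_TIMESTAMP_LOOKAHEAD = 30

  [security_logs]
  TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N %Z
  MAX_TIMESTAMP_LOOKAHEAD = 30

Here %3N picks up the millisecond part after the comma and %Z the timezone abbreviation already present in the raw text.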

The problem can arise with "Wed 2021 May 28, 16:58:11:430" because this log doesn't come with timezone info.  For Splunk to obtain the correct time, your indexer must use the same timezone as the machines that produce these logs; when they match, Splunk's internal representation will still be the actual epoch time.

If the indexer runs in a different timezone from the source machines, e.g., the indexer is running on UTC but the source machines are running on EST (-5), Splunk will interpret "Wed 2021 May 28, 16:58:11:430" as 1622221091.430000 (treating 16:58:11 as UTC) instead of the correct 1622239091.430000 (16:58:11 EST, i.e. 21:58:11 UTC).

If all source machines run in the same timezone, you can rectify this problem by setting TZ on the indexer, as sketched below.  If the source machines themselves run in varying timezones, you will need to set the forwarders' TZ on the source machines.
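A concrete sketch of that setting in props.conf on the parsing tier (indexer or heavy forwarder); the sourcetype name is made up, and TIME_FORMAT is just my guess at the Audit sample's layout, the key line being TZ:

  # props.conf (hypothetical sourcetype name)
  [audit_logs]
  TIME_FORMAT = %a %Y %b %d, %H:%M:%S:%3N
  TZ = US/Eastern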

Does this make sense?

SplunkDash
Motivator

Hello @yuanliu ,

How would I set the forwarder TZ to Eastern Time? Where do I need to make changes on the forwarder? Any help would be appreciated. Thank you so much again.

 


yuanliu
SplunkTrust

Have you tried to set TZ in props.conf?


SplunkDash
Motivator

Hello @yuanliu.

I plan to include the following stanza in props.conf at $FORWARDER_HOME/etc/system/local/:

[default]
TZ = US/Eastern


yuanliu
SplunkTrust

Does this mean it works?  Is your index server running in a different timezone?

SplunkDash
Motivator

Hello @yuanliu ,

Yes, the indexer is running in a different time zone. I haven't implemented it yet; I need to reach out to the client to let me change/update the props.conf file on their machine. I wanted to make sure it will work first. Thank you so much.


yuanliu
SplunkTrust

Yeah, it's tough to rely on the client to conduct tests to confirm a solution.  If at all possible, set up a few local test machines running in varying TZs; you can use a test index for this purpose. (If your network allows, you can even run a VM on your laptop to forward into the indexer, or run a forwarder on your own laptop and temporarily change its timezone.)  Another possible quick test is to set TZ on the indexer to the zone that is most common among forwarders; obviously this is not good if the index is already in production use.
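Once test data is flowing, a quick way to confirm the fix (assuming a test index literally named test; adjust to whatever you create) is to compare index time against parsed event time per host:

  index=test earliest=-1h
  | eval offset_hours = round((_indextime - _time) / 3600, 1)
  | stats avg(offset_hours) by host

Hosts whose average offset is close to 0 are being parsed in the right zone; a stable non-zero whole-hour offset points at a timezone mismatch for that host.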
