Dashboards & Visualizations

"Event timezone" compared to "actual timezone"

tlmayes
Contributor

Apologies in advance if this is answered somewhere and I didn't find it. Searched Google and "Answers" for every combination but no luck.

We need to ensure, and continually validate, that our event times and host timezone settings are correct. I understand that props.conf and other Splunk processes will normalize both the time and timezone of indexed events, but in all cases the raw event time data itself is unchanged and is determined by the application generating the event, or by Splunk.

A small example: we have a suite of servers physically located in Korea. The events are showing an offset of +7 (420 minutes), yet Korea is actually +9. Nothing is wrong with Splunk, and I am not aware of a Splunk config that will resolve this in any way. It is a host/application problem, yes?

I am trying to write a query for a dashboard that will compare the "event" date_zone offset from index=_internal with the publicly known offset (or a lookup) and produce an output on a dashboard when UFs in a particular known domain (ex: korea.company.com) are not reporting correctly, so that we can notify the host owner to correct the issue.

Thanks in advance

1 Solution

lguinn2
Legend

When a Splunk indexer ingests an event, it calculates the timestamp of the event. The basic process is explained in How timestamp assignment works and other places, but let me give you a more specific outline.

First - does the incoming raw event have a timestamp that includes a timezone? This is always the best case. If possible, configure your logging software to include an offset. A timestamp like [02/Jun/2016:15:11:51 +0900] is always better than [02/Jun/2016:15:11:51].
If an offset (or timezone abbreviation) appears in the timestamp, Splunk will use it.
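
If Splunk does not pick up that format on its own, you can spell the offset-aware pattern out in props.conf. This is only a sketch, and the sourcetype name here is made up:

    [my_access_log]
    TIME_PREFIX = \[
    TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
    MAX_TIMESTAMP_LOOKAHEAD = 30

The %z is what tells Splunk to read the +0900 as the event's timezone offset.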

Second, is a TZ set for the host (or sourcetype or source) in a props.conf file on the indexer? If not, this may be where you need to set a timezone to ensure that the data is indexed correctly.
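
For example, if the Korean hosts turn out to be sending no usable timezone information, a stanza along these lines on the indexer would pin them to Korea Standard Time; the host pattern is just an illustration based on the domain you mentioned:

    [host::*.korea.company.com]
    TZ = Asia/Seoul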

Third - did the data come to the indexer via a Splunk universal forwarder (UF)? And was the version of the UF 6.0 or greater? 6.x UFs send local OS timezone information to the indexer along with the raw data. If the UF provides a local timezone, the indexer will use that - unless a TZ is specified in props.conf. So a possible reason for the problem might be that the operating system does not have the correct timezone setting - in that case, you could simply fix the timezone on the server, and probably would not need to set props.conf.

If the indexer has no information about the timezone from any of the sources above, it uses its own local OS timezone. I prefer all my Splunk indexers to have the underlying OS timezone set to UTC. That way, I am sure that no indexer will parse timestamps differently from the rest.

Finally - the timestamp is always stored in the Splunk index in UTC, and it is always displayed in the timezone of the Splunk user's choice. So be certain that your user (or you) are not being confused by the user timezone setting. This can be set by clicking on the user name in the GUI; the timezone display is one of the settings users can control, along with their password and a few other things.

Hopefully, this will help you figure out where things are going wrong for your Korean servers!


CryoHydra
Path Finder

The timezone setting in a user's account will affect the timestamp shown in events, correct?

Answer: True

So the displayed _time is influenced by the Splunk UI logon account's timezone?


tlmayes
Contributor

lguinn2, thanks for the response. I will try to address your great questions:

  1. I fully understand (I believe) the timestamp process. We do not currently have any issue with anything Splunk-related, unless I have missed something. I may be looking for a silver bullet where none exists.
  2. Regarding incoming raw events, I am using and focusing on index=_internal as the benchmark for each reporting UF, since I fully believe this to be the most accurate representation of the UF host server's clock. That being said, yes, the date_zone is being reported as +0700. No abbreviations appear anywhere, which is expected.
  3. To the question of whether a TZ is set for the host: we do not have direct access to any of the current 300 hosts, so TZ settings are out of our control unless set within Splunk. None currently appear anywhere.
  4. Yes, as stated in #2, I am using _internal, and everything is v. 6.4.4.

I understand that we can fix the TZ setting using props.conf, but isn't it true that this will have no effect on the original raw event logs generated by applications external to Splunk but indexed by Splunk? There are two issues I am trying to overcome/address. Here we are concerned with the forensics aspect of the event, i.e. the time within the raw event must agree with the host; where no time exists within the raw event, there is no issue.
1. First: Validate that the TZ or date_zone agrees with the physical location. I will likely end up writing a query that compares the _internal events with a lookup table (something like the sketch after this list).

2. Second: Identify when the "original" event time (Splunk cannot modify this, correct?) is incorrect (same as #1), so that we can have the server owner set their TZ/offset correctly.
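
Roughly what I have in mind for #1 is below. This is an untested sketch: expected_tz would be a lookup definition we still have to build, over a CSV with fields domain and expected_zone (date_zone is expressed in minutes east of UTC, so Korea would be 540):

    index=_internal
    | stats latest(date_zone) as reported_zone by host
    | eval domain=replace(host, "^[^.]+\.", "")
    | lookup expected_tz domain OUTPUT expected_zone
    | where isnotnull(expected_zone) AND reported_zone != expected_zone
    | table host, domain, reported_zone, expected_zone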

Regarding your stated preference to use UTC, I fully concur, but we do not have control of the servers, only Splunk (and even then only via Apps). No login capability.

Finally, to your last point: no problem with this understanding; I know that Splunk displays time per my login setting.

Anybody else out there have to deal with this?


lguinn2
Legend

PS. If the event data actually contains [02/Jun/2016:15:11:51 +0700] when it should be [02/Jun/2016:15:11:51 +0900], then you can use TIME_FORMAT in props.conf to tell Splunk exactly how to parse the timestamp. Just make sure that your TIME_FORMAT excludes the +0700 part of the event, and make sure to set TZ correctly.
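
Something along these lines, where the sourcetype name is only a placeholder:

    [korea_app_logs]
    TIME_PREFIX = \[
    TIME_FORMAT = %d/%b/%Y:%H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 20
    TZ = Asia/Seoul

Because the TIME_FORMAT stops before the offset, the bogus +0700 is ignored and the TZ setting determines how the timestamp is interpreted.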
