Getting Data In

How do you adjust time with syslog-ng or universal forwarder for devices in different time zones?

oleg106
Explorer

Hi,

We are centralizing and collecting logs from various devices via syslog-ng, and sending them to indexers via universal forwarder on the same box. The issue I am running into is that some devices in Asia and other places are on local time zones, so logs are stored in future time.

What's a good way to deal with this without changing the devices' local time zones? Perhaps store the original log time in another field and convert _time to EST? Just curious whether anyone has found a good way to handle this. Thanks in advance!

1 Solution

FrankVl
Ultra Champion

Several options really:

  1. Get your source devices to log in a standard zone (but you already ruled out that option, which is indeed often a difficult thing to achieve).

  2. Get your source devices to report their time zone as part of the event. Really depends on whether or not your source device supports that.

  3. As @somesoni2 mentioned: configure per source type / source / host time zone through props.conf. But with many hosts this can become very hard to manage (depending also on how dynamic the set of source devices is).

  4. Use regional syslog servers, such that the syslog server (and UF) are in the same zone as the source device. But I know from experience that even with regional collection points you may still have to deal with exceptions.

  5. Get your syslog daemon to overwrite the original time stamp with a time stamp of when the event was received on the syslog server. Bit of a last resort option as it obviously reduces the accuracy of the time stamp. But it may be a better solution than having to deal with many timezone exceptions.

  6. Configure Splunk to use the file modify time or the index time to populate _time, rather than the time stamp from the event. Same drawbacks as 5.
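To make options 3, 5, and 6 concrete, here is a rough sketch of what the configuration could look like. The host pattern, sourcetype name, and time zone are illustrative, not from the original thread:

```
# --- Option 3: per-host time zone in props.conf (on the parsing tier) ---
# Wildcards are supported in host:: and source:: stanzas.
[host::asia-fw-*]
TZ = Asia/Tokyo

# --- Option 6: ignore the event timestamp and use index time instead ---
[my_syslog_sourcetype]
DATETIME_CONFIG = CURRENT
```

For option 5, syslog-ng has a global option that makes it stamp events with the time they were received rather than the time in the message header:

```
# syslog-ng.conf: replace the sender's timestamp with the receive time
options {
    keep_timestamp(no);
};
```

As noted above, both the syslog-ng rewrite and DATETIME_CONFIG = CURRENT trade timestamp accuracy for simplicity, so they are best kept as fallbacks.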


somesoni2
Revered Legend

You can specify the time zone to be used in props.conf on the universal forwarder. This can be done for a particular sourcetype, host, or source (wildcards supported for host/source). See this for more information:

https://docs.splunk.com/Documentation/Splunk/7.1.3/Data/Applytimezoneoffsetstotimestamps#Specify_tim...


TonyLeeVT
Builder

Unfortunately, this will not work because the Universal Forwarder ignores props.conf timestamp settings (it does not parse events). Per the link you shared:

"If you have Splunk Enterprise and need to modify timestamp extraction, perform the configuration on your indexer machines or, if forwarding data, use heavy forwarders and perform the configuration on the machines where the heavy forwarders run. If you have Splunk Cloud and need to modify timestamp extraction, use heavy forwarder and perform the configuration on the machines where the heavy forwarders run."
