Getting Data In

Why is the Indexer ignoring my timezone settings?

Builder

Hi,

I've got a problem that's driving me crazy. There is a source we're reading via a universal forwarder that is the output of syslog from a whole bunch of servers. This means that some of the lines represent servers in different timezones, depending on the host. Yeah, I know, not so great, but it's not within our control or influence.

I have been creating [host::] stanzas in a props.conf on our indexer cluster master and setting the TZ per host, such as "TZ = America/New_York". If I go to one of the indexers and

splunk btool props list --debug

I can see the host entries I made.

However, the events are still being indexed as if they are the local time of the indexer. The sourcetype here is 'syslog' but I know that "host::" should override the sourcetype stanza in props.conf. I hunted around for a "source::" stanza that I might not know about that matches and I can't find one anywhere.

I'm not sure where to go from here, but any help would be appreciated. I hope I'm missing something obvious...

Thanks

1 Solution

Explorer

Since you have multiple hosts writing to one file, I'm going to assume you're using a transform to parse the hostnames, and your expectation is that Splunk will then apply your time zone setting per host. The problem is that the time is set prior to transforms running, so the hostname is parsed after the timestamp has already been extracted by Splunk, and by that point it is too late.

One workaround would be to separate the hosts by time zone into different files (this could be achieved by using a different syslog listening port for each TZ). Then you can configure the TZ per file.

Another workaround would be to modify your syslog config to write each host to a different directory by adding the host variable to the path. Then you can use a path segment to set the host rather than a transform, and your host:: props will work.
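
To sketch that second workaround (paths, segment number, and values are hypothetical and assume rsyslog writes each sender under /var/log/remote/<host>/): the UF takes the host from the path with host_segment in inputs.conf, and because the host is then already set before timestamp extraction, the host:: TZ props on the indexers apply as expected.

inputs.conf on the UF:

[monitor:///var/log/remote/*/syslog_*]
sourcetype = syslog
# /var/log/remote/<host>/... -- <host> is the 4th path segment
host_segment = 4

props.conf on the indexers:

[host::1.2.3.4]
TZ = America/New_York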

SplunkTrust

Hi, you mentioned:

“I have been creating [host::] stanzas in a props.conf on our indexer cluster master and setting the TZ per host”

So I have to ask: did you put the props in an app in $SPLUNK_HOME/etc/master-apps and then apply the cluster bundle, or are you making these edits to $SPLUNK_HOME/etc/system/local/props.conf?

The former should work as long as you're not collecting data with a heavy forwarder OR forwarding through a heavy forwarder (aka intermediate forwarder).
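
For reference, a minimal sketch of the former (the app name here is made up):

$SPLUNK_HOME/etc/master-apps/tz_overrides/local/props.conf:

[host::1.2.3.4]
TZ = America/New_York

followed by "splunk apply cluster-bundle" on the cluster master to push it to the peers.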

Builder

Data is coming from a universal forwarder to an index cluster member. Yes, I made one "app" under master-apps and made it specifically for timezone setting and nothing else.

No heavy forwarder involved.

As it wasn't working, I did the 'btool' noted above on an indexer to confirm that those settings were getting to the indexer(s).

Influencer

Hi,

just a thought: did you maybe rename the host via transforms?

Splunk doesn't re-evaluate props.conf after changing the host with a transform, so you cannot do index-time things like setting TZ based on a host that was itself set by a transform. You need to key off the original host, source, or sourcetype.

Builder

Interesting thought, but unfortunately, no. There are a LOT of these IPs. I'm using MetaWoot and trying to clean up the glut of latency issues. We've never added this many hosts to any config file, let alone transforms. (Ooops).

Motivator

If you are using universal forwarders, you can use an (undocumented) feature which sets the timezone of an input at the UF level:

 [your_input_stanza]
 _tzhint = America/New_York

This may do the trick for you.
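
For example, applied to a monitor input (the path is hypothetical; note that _tzhint applies a single timezone to the whole input, so it only helps where each input covers hosts in one TZ):

[monitor:///var/log/remote_syslog/]
sourcetype = syslog
# one TZ for everything read by this stanza
_tzhint = America/New_York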

Builder

Hmm. No, not doing anything with transforms.

On the UF, the monitor stanza looks like:

[monitor:///var/something/something/]
sourcetype = syslog
whitelist = /syslog_\d{8}$
recursive = true
index = index_for_that_syslog_stuff
ignoreOlderThan = 90d

So the sourcetype is getting set there.

I started wondering how it actually figures out the hostname, rather than, say, using the local hostname of the machine where the source file lives.

I did not write this stuff, but the UF deployment also has a props.conf and a transforms.conf. The props.conf is for other sourcetypes, none of which would match syslog, and of course nothing in transforms would match that either. And really, because the UF isn't the parsing layer, I think a lot of this is just ignored anyway.

This leads me to think that something is grabbing that syslog sourcetype and picking out the field after the date and taking it as the hostname.
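
For reference, a typical syslog line (made-up values) puts the hostname as the field right after the timestamp, which is what such an extraction would key on:

Oct 11 22:14:15 1.2.3.4 sshd[1234]: Accepted publickey for root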

FYI, the filename is .../syslog_YYYYMMDD
I looked on the indexer and the closest things that match either the sourcetype or the source are

[delayedrule::syslog] -- built-in but should only be called if the sourcetype isn't set
[source::....syslog] -- from SplunkTAnix's props.conf, doesn't match and doesn't set TZ
[source::.../syslog(.\d+)?] -- built-in, doesn't match filename
[syslog] -- from SplunkTAnix's props.conf, doesn't set TZ and host:: should take precedence, no?

[syslog] does have some host-related transforms, but they're all "REPORT"

I was primarily looking for a source:: stanza that might override any host:: TZ I'd set. It's my assumption that all 3 (source, host, sourcetype) are merged in that sequence such that any TZ settings I had for host are then merged "down" to sourcetype.

Still rather befuddled. The btool output definitely shows me the correct TZ for that host in props.conf on one of the indexers.

Thanks

Explorer

The only way it could be parsing multiple different hostnames from that file is by using transforms. So it's not working for one of two reasons:

  1. You're not parsing the hostname, so your host:: stanzas don't apply.
  2. You are parsing them, but the timestamp is already set before that transform runs.

Since your sourcetype is "syslog", that is a pre-trained sourcetype that Splunk ships with host-extraction transforms for, and those are how the hostname is being extracted. The problem is that timestamp extraction always comes before transforms.

Bottom line: parsing multiple hostnames from an individual file and then trying to configure TZ per host will not work, because the timestamp will always be extracted before the hostname is parsed.

This configuration would never work, regardless of any source or sourcetype stanzas. No matter what stanzas may or may not exist, the timestamp will always be extracted before the host transform is run.

Builder

(Not disagreeing, just trying to understand better...)

I think I understand your point, but I wish I could at least figure out the path this takes to set that.

Since the hostname is getting set somehow, and is being picked out as the field right after the timestamp, it must be coming from somewhere. I just can't figure out where.

If I'm not parsing the hostname directly, wouldn't Splunk then set the hostname to the local host these events came from, rather than from the event itself?

Thanks

Explorer

The hostname is being parsed using the system/default props and transforms:

system/default/props.conf
[syslog]
TRANSFORMS = syslog-host

system/default/transforms.conf
[syslog-host]
DEST_KEY = MetaData:Host
REGEX = :\d\d\s+(?:\d+\s+|(?:user|daemon|local.?)\.\w+\s+)*\[?(\w[\w\.\-]{2,})\]?\s
FORMAT = host::$1

As you mentioned, the host is initially set to the UF host the events came from, rather than from the event itself; at that point ALL events have the same host. Then the timestamp is extracted. Finally, transforms run: the host transform above fires and the host is extracted...but that happens after the timestamp has already been extracted.

The point is, this will never work: you can never extract the host before extracting the timestamp. So you will need to use a different method, like one of the two I suggested in my initial post.
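
If it helps, the per-host-directory approach can be sketched on the rsyslog side roughly like this (template name and paths are hypothetical; property names per rsyslog's RainerScript syntax):

# write each sending host into its own directory, one file per day
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%/syslog_%$year%%$month%%$day%")
*.* action(type="omfile" dynaFile="PerHostFile")

Each sender then gets its own directory, and the UF can set host from the path (host_segment) instead of relying on a transform.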

Builder

Got it. Sadly.

Thanks very much for the education there. I'll see if we can agree to change how this is written out.

Thanks

SplunkTrust

Please share the props.conf settings for one of your [host::] stanzas.

Builder

They’re all just as simple as

[host::1.2.3.4]
TZ = America/New_York

and

[host::2.3.4.5]
TZ = Europe/London

And so on.

And yes, whoever set up rsyslogd on the server where this file gets written does not look up the FQDN, and thus all the hostnames are IPs 😕

Thanks
