Hello,
Some of our forwarder installations are behaving strangely.
It takes an hour for their data to be indexed and displayed in Splunk Web, and the timestamp is offset by 60 minutes.
For most of our Splunk forwarders, the data appears in Splunk Web almost immediately, and the timestamps match.
Reinstalling the affected forwarders did not help.
Do you have a solution?
If the "delay" is consistent and appears to be rounded to full hours (in some cases smaller subdivisions, but that's rare), it's usually a timezone problem. There can be multiple causes for this:
1) The source might be reporting no timezone information, or even a wrong one.
2) The sourcetype might not be properly configured for timestamp recognition at all.
3) The sourcetype might not assign the proper timezone when there is no timezone information in the original events.
So it all depends on the details of your particular case. You haven't provided many details, so we can't tell which one it is.
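For causes 2) and 3), the usual remedy on the Splunk side is a props.conf stanza that pins the timestamp handling explicitly. A minimal sketch (the sourcetype name, the exact TIME_FORMAT, and Europe/Berlin are assumptions, not taken from your config):

```
# props.conf -- sketch only; adjust sourcetype name and format to your data
[my_sourcetype]
# Tell Splunk exactly where the timestamp starts and how to parse it (cause 2)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
# Fallback timezone, used only when the event itself carries no offset (cause 3)
TZ = Europe/Berlin
```

This needs to live on the first full Splunk instance in the data path (indexer or heavy forwarder), since universal forwarders do not parse timestamps.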
The source is the MySQL error logfile.
The sourcetype is the Splunk native "mysqld_error".
I have 12 database servers with a universal forwarder monitoring the MySQL error logfile:
7 servers work fine (instant indexing and correct timezone),
5 servers show the problem described above.
inputs.conf
[default]
host = MYSQL01
[monitor:///dblog/errorlog/mysql-error.log]
disabled = false
sourcetype = mysqld_error
index = mysql-errorlog
props.conf
[monitor:///dblog/errorlog/mysql-error.log]
LEARN_SOURCETYPE = false
OK. Two things.
1. Your props.conf stanza is wrong. monitor: is a type of input, and you're not supposed to use it outside inputs.conf. Props stanzas should reference a sourcetype, a source, or a host.
2. mysqld_error is one of the built-in sourcetypes. Unfortunately, it doesn't contain any timestamp recognition settings, so Splunk tries to guess. First, I'd check whether all hosts report the timestamps in events in a consistent manner.
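For reference, a props.conf stanza can be scoped either to the sourcetype or to the source path (note the source:: prefix, not monitor:). A sketch of the two valid forms; the TZ value here is an assumption for illustration:

```
# Per-sourcetype stanza
[mysqld_error]
TZ = Europe/Berlin

# Or a per-source stanza matching the monitored file
[source::/dblog/errorlog/mysql-error.log]
LEARN_SOURCETYPE = false
```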
I found the reason for the problem.
MySQL v5.7 uses the system timezone → 2025-04-29T11:42:01.532704+01:00
MySQL v8.0 uses the system timezone → 2025-04-29T11:42:01.532704+02:00
I can't explain the difference, because the timestamp settings are the same in both versions.
Anyway, I tried to fix this by setting the timezone via TZ = in the props.conf of both the forwarder and the indexer.
But no success 😞
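One MySQL-side knob worth checking on each server (an assumption about your setup, since your 5.7 and 8.0 hosts may be configured differently): the log_timestamps system variable, available since MySQL 5.7.2, controls whether error-log timestamps are written in UTC (the default) or in the server's system time zone. A sketch:

```sql
-- See how this server writes error-log timestamps (UTC or SYSTEM)
SHOW GLOBAL VARIABLES LIKE 'log_timestamps';

-- Force a consistent choice across all 12 servers
SET GLOBAL log_timestamps = 'SYSTEM';
```

To make the setting survive a restart, put log_timestamps in my.cnf as well.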
As far as I remember, if your timestamp contains timezone information and it is properly parsed from the timestamp, the TZ setting is not used. And rightly so! After all, you're specifying the point in time unambiguously, so why should Splunk second-guess you by adjusting it with artificially added TZ information?
If your sources report "the same" timestamp in two different timezones, they are effectively reporting two different timestamps, and Splunk's behaviour is correct. You should fix your sources to report the same timestamp (i.e. the same "absolute" instant after the TZ-based correction is applied).
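This point can be checked directly with the two timestamps quoted earlier in the thread: once the UTC offsets are applied, the "identical" wall-clock values are a full hour apart. A small sketch using Python's standard datetime module:

```python
from datetime import datetime

# The two timestamps reported by the MySQL 5.7 and 8.0 hosts
t_57 = datetime.fromisoformat("2025-04-29T11:42:01.532704+01:00")
t_80 = datetime.fromisoformat("2025-04-29T11:42:01.532704+02:00")

# Same wall-clock digits, but the +01:00 event is one hour later
# in absolute (UTC) time than the +02:00 event
diff = (t_57 - t_80).total_seconds()
print(diff)  # 3600.0
```

So when Splunk honours the offset embedded in the event, the one-hour "delay" is simply the hour of difference between the two absolute instants.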
Hello Giuseppe,
the server is synchronized with an NTP server.
There are no timezone settings in the Splunk configuration files under $SPLUNK_HOME/etc/system/local.
Adding TZ = Europe/Berlin in props.conf doesn't solve the problem.
Hi @chrisitanmoleck,
did you try to configure the Default Timezone for your user (in Account Settings)?
Ciao.
Giuseppe