Getting Data In

DateParserVerbose - timestamp match is outside of the acceptable time window

marksnelling
Communicator

I'm getting a LOT of these warnings in my splunkd.log file but the timestamps all look correct. Can someone help me out here?


02-22-2012 17:01:12.280 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:01:12 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"
02-22-2012 17:01:28.289 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:01:27 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"
02-22-2012 17:02:12.311 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:02:12 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"
02-22-2012 17:03:43.210 +0000 INFO ExecProcessor - Ran script: /opt/splunk/etc/apps/sos/bin/lsof_sos.sh, took 260.3 milliseconds to run, 0 bytes read
02-22-2012 17:05:31.448 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:05:31 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"
02-22-2012 17:06:12.465 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:06:12 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"
02-22-2012 17:06:28.480 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:06:27 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"
02-22-2012 17:07:12.499 +0000 WARN DateParserVerbose - A possible timestamp match (Wed Feb 22 17:07:12 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context="source::/var/opt/jira/log/atlassian-jira-security.log|host::syn|atlassian_jira|remoteport::47375"

1 Solution

marksnelling
Communicator

This was actually being caused by some buggy disk replication software, CA ARCserve. It wasn't letting the last-modified attribute on the log file be updated when an entry was written.

CA fixed the bug and this problem has gone away.


tmeader
Contributor

Until Splunk does something to fix this issue either on the Forwarders or with MAX_DAYS_HENCE itself, we're just using this quick hack, which at least gets rid of the messages (if your MAX_DAYS_HENCE is still set to the default of "2"):

1) Create a small batch file with just:

net stop "SplunkForwarder"
net start "SplunkForwarder"

2) Schedule this to run with Windows Scheduler once a day.

You'll be missing the few seconds of events during the restart period, so for a mission critical box, this isn't a great solution. But this is a hack we're willing to deal with for the time being.
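For reference, step 2 can also be done from an elevated command prompt with schtasks (the task name, script path, and run time below are placeholders, not values from this thread):

```bat
:: Register the restart batch file as a daily Windows scheduled task.
schtasks /create /tn "RestartSplunkForwarder" /tr "C:\scripts\restart_splunk.bat" /sc daily /st 03:00
```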

tmeader
Contributor

You sure a restart of the Splunk Forwarder wouldn't have accomplished the same thing? That's the behavior that we noticed. Just curious.

mslvrstn
Communicator

For us it was a filesystem issue.
Even though the file was constantly being appended to (splunkd.log) the modification timestamp was not being updated.
Being the consummate Windows SysAdmins that we are, we rebooted the box and it fixed itself. Fortunately, this was a QA system.

yannK
Splunk Employee
Splunk Employee

The root cause of this problem is that MAX_DAYS_HENCE is not always anchored on the current time: in the case of file monitor inputs, it is anchored on the modification time of the source file.

Currently, this is not explicitly described in the splunkd.log WARN message (shows only current time and event time), but we are looking into changing that. The props.conf.spec will be modified as well to document this behavior.

This is most prominent on Windows, where appending new content to a file without closing and reopening it typically does not update the file's modtime.

When files are not rotated every few days (for example, the splunkd.log of a forwarder that sees little activity), the modtime eventually falls more days behind than MAX_DAYS_HENCE allows (2 by default); the timestamps of newly added events then violate MAX_DAYS_HENCE and these warning messages begin to appear.

Possible solutions:

  • Increase MAX_DAYS_HENCE in props.conf to tolerate the discrepancy between the file modtime and the time stamp extracted from its events.
  • Use an external mechanism to routinely update the modtime of the affected file(s) to avoid violating MAX_DAYS_HENCE.
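For the first option, a props.conf sketch (the stanza name and value are examples only; pick a window large enough to cover how long the file's modtime can lag behind its newest events):

```ini
# props.conf -- widen the acceptable window so events whose timestamps are
# "in the future" relative to a stale file modtime are still accepted.
[atlassian_jira]
MAX_DAYS_HENCE = 30
```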

In addition to documenting this behavior, we are currently evaluating configuration or code changes needed to prevent Splunk internal files from triggering these errors.

jdthiele
Engager

I have been battling this for weeks. With the help of this post, I finally discovered that the log files received from a vendor do not have a modification time and a simple "touch $FILE" on Linux enables a nice clean ingest and timestamp parsing. Woohoo! THANKS!
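The touch fix can also be automated, e.g. from cron, so the modtime never falls outside the window again. A minimal sketch, assuming the vendor files live under /var/opt/vendor/logs (the path and pattern are placeholders):

```shell
# Refresh the modification time of every monitored log file so the
# modtime-anchored MAX_DAYS_HENCE check is not violated by new events.
# -c tells touch not to create files that have disappeared meanwhile.
find /var/opt/vendor/logs -name '*.log' -exec touch -c {} +
```

Run it from cron more often than once per MAX_DAYS_HENCE days (2 by default).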

marksnelling
Communicator

All our clocks are in sync using NTP. If this is caused by the mod time not being updated then that is probably out of our control?
I also agree that having a huge MAX_DAYS_HENCE value is not a fix.
Perhaps a flag not to have MAX_DAYS_HENCE check the file mod time?

tmeader
Contributor

Is this a bug that is planned to be fixed (either by making sure that splunkd.log gets its modified timestamp updated properly, or by having MAX_DAYS_HENCE stop checking the mod time)? I'd rather not set a gigantic, unnecessary value for MAX_DAYS_HENCE (since the forwarder often runs for weeks to months with no restart, and thus, I assume, no updated file mod time) just to take care of these warning messages. Alternatively, we could set up the Windows hosts to restart the Splunk Fwder service as a daily scheduled task, which would also update the mod time (we'd rather avoid this).

mslvrstn
Communicator

OK, that would seem to explain it.
Now, the only-tangentially-Splunk-related issue is:
Why does splunkd.log on my Windows UniversalForwarder not get its file modification time updated as new log lines are written?

kst
Explorer

We went from 4.2.3 to 4.3 about a month prior to this, on the indexer that is having the issue. Vast majority of the forwarders are still on 4.2.x.
I've also opened a call with support.

mslvrstn
Communicator

Mine are also all new 4.2 installs.

marksnelling
Communicator

No, we're evaluating the product so no support contract.

marksnelling
Communicator

No, mine have all been fresh installs of the latest (4.3) version.

tmeader
Contributor

In any of the cases above, have you migrated from an older version of Splunk to your current instance? Or were they relatively new installs? In our case, we've migrated all the way up from 2.x (taking the directed steps each time, of course)... so support seems to think that might have something to do with it.

Please chime in, as support is now watching this thread.

Thanks.

kst
Explorer

Seeing the same thing here -- splunkd.log FILLS with this warning, from a variety of forwarders, to our v4.3 indexer. I'm seeing both "possible timestamp match (Tue mar 6 14:50:43 2012)" and "possible timestamp match (03-06-2012)" (note the differing timestamp formats) from various hosts that are forwarding to this indexer.

As the others have said, the timestamp it is complaining about is within a second of the error timestamp itself.

tmeader
Contributor

Again, please open a ticket if possible. In our case, it is also a Universal Forwarder on a (well, on 6 different) Windows box that seems to be responsible for these WARN messages. Likewise, they're coming from the splunkd log on the Forwarders.

Thanks.

mslvrstn
Communicator

I'm seeing the same thing. What looks really odd to me about the timestamp it complains about, in both yours and mine, is that the year is in an odd location. Your example:

A possible timestamp match (Wed Feb 22 17:01:12 2012) is outside of the acceptable time window

Having the time split up the day and year doesn't look like anything parsed by datetime.xml.

In my case, the line it is processing is actually from splunkd.log from a windows forwarder, so it's not a timestamp under my control. Nor do I really even see a timestamp of this format in the source file.

03-07-2012 06:34:18.235 +0000 WARN  DateParserVerbose - A possible timestamp match (Wed Mar  7 06:34:05 2012) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE.  Context="source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log|host::winplay-dc1|splunkd|remoteport::56124"
host=lxsplunk-qc1   sourcetype=splunkd   source=/opt/splunk/var/log/splunk/splunkd.log   splunk_server=lxsplunk-qc1

Perhaps this will give someone at Splunk an insight into where to look further.
For reference, I'm on 4.2.2

tmeader
Contributor

If you have support with Splunk, can you please call and open a case? We've seen the exact same issue, but so far Splunk has been unable to replicate it in their testing. Perhaps if they examine your setup as well, they can finally reproduce it. You can reference them to case #74595.

Thanks.

marksnelling
Communicator

I'm surprised that no one else is having this problem. I've reinstalled Splunk in this evaluation setup a number of times and always get this. It's not specific to the sourcetype in the above example either; it happens with linux_secure etc.

MarioM
Motivator

You may want to increase MAX_DAYS_HENCE; see the timestamp attributes in props.conf.
