Has anyone seen this error message: "Monotonic time source didn't increase; is it stuck?"

tywhite
Explorer

Since we upgraded to 7.0, we've been seeing this particular error show up in the logs:

10-17-2017 11:30:30.772 -0600 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?

We weren't able to find much information regarding this error online and wanted to poll the audience to see if anyone has encountered this as well.

uona
Observer

Got the same error:

Splunk Enterprise
Version: 8.0.2
Build: a7f645ddaf91

06-01-2020 13:04:41.446 -0400 ERROR PipelineComponent - Monotonic time source didn't increase; is it stuck?


stefanghita
Engager

I had the same question, so I opened a Splunk support case. This is the response:

"This is an error we have come across with some of our Windows customers, and seems more common of virtualized instances. The splunk process will periodically check the time of the OS system and will show this error if there is a difference (~15 ms) as an indication of the time progress internally. This is really an internal ERROR that should not be reported.

Reference: GetTickCount64 function https://docs.microsoft.com/en-gb/windows/win32/api/sysinfoapi/nf-sysinfoapi-gettickcount64

This issue is fixed in version 8.0.0; if you would like to stop this error from occurring, you will need to look into upgrading to 8.0. Otherwise, you can ignore this error message."
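
For anyone curious about the mechanism the support response describes, here is a minimal, illustrative C sketch. This is not Splunk's actual code; it only assumes the documented behavior of GetTickCount64 (a monotonic millisecond counter with roughly 10-16 ms resolution) and shows how a "did the monotonic clock advance?" check can fire spuriously when two samples land inside the counter's resolution:

```c
/*
 * Illustrative sketch only -- not Splunk's actual implementation.
 * It mimics the kind of check the support response describes: sample a
 * monotonic tick counter, sample it again later, and complain if it has
 * not advanced.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULONGLONG first = GetTickCount64();  /* monotonic ms since boot */

    /* Simulate work between the two samples. Sleeping for less than the
     * counter's typical ~10-16 ms resolution means the second read can
     * legitimately return the same value. */
    Sleep(5);

    ULONGLONG second = GetTickCount64();

    if (second <= first) {
        /* Analogous to the log line:
         * ERROR PipelineComponent - Monotonic time source didn't
         * increase; is it stuck? */
        fprintf(stderr,
                "Monotonic time source didn't increase; is it stuck?\n");
    } else {
        printf("Tick counter advanced by %llu ms\n",
               (unsigned long long)(second - first));
    }
    return 0;
}
```

Because the sleep is shorter than the counter's resolution, the second read frequently returns the same value, reproducing the kind of false positive described above. That also fits the observation in the support response that the error is more common on virtualized instances, where timer granularity and scheduling jitter are larger.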

esix_splunk
Splunk Employee

What does your timestamp look like for that data source? Typically this would be a timestamp that isn't recognized, or something wrong with your data source...


tobais
Engager

I have this error on one heavy forwarder but not the other, even though both pull the same configurations. The data sources are the same in each environment, but only one throws this message, followed by:

WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group splunkcloud has been blocked for 39750 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.


jwinderDDS
Path Finder

How do I go about identifying which source is having an issue? Looking at splunkd.log, it isn't obvious.

Thank you in advance,

Jeremy

tywhite
Explorer

The timestamp shown in the error I posted is directly from the splunkd.log file.
