Why are we receiving this ingestion latency error after updating to 8.2.1?
So we just updated to 8.2.1 and we are now getting an Ingestion Latency error…
How do we correct it? Here is what the link says, and then we have an option to view the last 50 messages:
Ingestion Latency
- Root Cause(s):
- Events from tracker.log have not been seen for the last 6529 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
- Events from tracker.log are delayed for 9658 seconds, which is more than the red threshold (180 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
- Generate Diag? If filing a support case, click here to generate a diag.
Here are some examples of the messages shown:
- 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\spool\splunk.
- 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\run\splunk\search_telemetry.
- 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\watchdog.
- 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\splunk.
- 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\introspection.
- 07-01-2021 09:28:52.275 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\etc\splunk.version.
- 07-01-2021 09:28:52.269 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\CrushFTP9\CrushFTP.log.
- 07-01-2021 09:28:52.268 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\watchdog\watchdog.log*.
- 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\splunk_instrumentation_cloud.log*.
- 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\license_usage_summary.log.
- 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk.
- 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\introspection.
- 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\etc\splunk.version.
- 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\tracker.log*.
- 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_new.
- 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_hec.
- 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk.
- 07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\run\splunk\search_telemetry\*search_telemetry.json.
- 07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - TailWatcher initializing...
Is this bug still an ongoing issue? We have upgraded to version 9.3.1 and receive a Forwarder Ingestion Latency message stating: "Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1362418. Unhealthy instances: Indexer3."
If this bug is still ongoing, can someone please post the workaround?
Thanks in advance!!
I believe the solution is to disable the feature:
Create a health.conf entry in /opt/splunk/etc/system/local on the affected machines, being sure to restart Splunk after the entry is made.
[feature:ingestion_latency]
alert.disabled = 1
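For anyone applying this, here is a minimal sketch of doing it from the shell, assuming the default /opt/splunk install path from the post above (adjust to your own $SPLUNK_HOME):
# Append the override and restart Splunk so health.conf is re-read
cat >> /opt/splunk/etc/system/local/health.conf <<'EOF'
[feature:ingestion_latency]
alert.disabled = 1
EOF
/opt/splunk/bin/splunk restart
Note that this only silences the health indicator; it does not change ingestion behaviour itself.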
Thanks PeteAve! I'll try that and see what happens.....
Hi team, does anyone have the solution? I have updated to the latest version, 9.1.1, and guess what? It has the same error:
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value
Does anyone have a solution?
I had this issue after updating from 8.2.1 to 8.2.10. It was found that there were two monitors for tracker.log so I disabled the one I found in $SPLUNK_HOME/etc/system/local/inputs.conf:
[monitor://$SPLUNK_HOME\var\spool\splunk\tracker.log*]
disabled = 1
After Splunk restarted, there were no issues. I hope this helps someone.
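If you are not sure where a duplicate tracker.log monitor is coming from, a quick sketch using btool (the grep pattern is only illustrative):
# Show every effective input stanza and the inputs.conf file it comes from
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i "tracker.log"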
Is there a solution to fix this? My version is already upgraded to 9.0.3.
Can I modify limits.conf to limit throughput?
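If you want to experiment with limits.conf, the throughput cap lives under the [thruput] stanza. A sketch only; the value shown is just an example, and whether to raise or lower it depends on why ingestion is falling behind (universal forwarders often default to 256):
[thruput]
# 0 removes the KB/s cap; set a concrete number to throttle instead
maxKBps = 0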

Hello All,
This is a known bug in Splunk and we are working to address it. Please use the following workaround in the interim.
Create a health.conf entry in /opt/splunk/etc/system/local on the affected machines, being sure to restart Splunk after the entry is made.
[feature:ingestion_latency]
alert.disabled = 1
Let me know if you have any questions or concerns
Hello,
Could you tell me about a real workaround? This one only disables the report. Many thanks in advance.
Best regards
__
Philipp from France
Has the bug been resolved in Splunk Enterprise version 9.2.1 (latest version)?
Still suffering the same in 9.10.2. Is the bug already fixed or do we still need to apply this fix to silence the message?
Apart from that, does this disable ingestion latency warnings, and do we need to monitor it another way?
Can you shed some light on this @jswann_splunk ?
Good morning,
If this is a known bug, then why is it not listed or addressed as a known issue under the latest release?
Does this workaround apply to version 9.x?

I got this same behaviour on my updated Splunk 8.2.6.
I found that the problem was an inputs.conf with the stanza
[monitor://$SPLUNK_HOME/var/run]
I really do not remember why I had this input; maybe in an older 7.x.x I used it for some "special" ingestion.
Removing the input in 8.2.6 removed the error in the UI, and tracker.log appeared regularly in splunkd.log by default.
👨🔧
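After removing a stanza like that, one way to double-check what Splunk is still tailing is the CLI monitor list (a sketch; it will prompt for admin credentials):
# List the files and directories Splunk is currently monitoring
$SPLUNK_HOME/bin/splunk list monitor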
A bit late but posting since I haven't seen this info anywhere yet. I had a support case open for similar symptoms since going to 8.2.6. I had already taken extensive steps to rule out legitimate IO saturation and did not feel comfortable adjusting the threshold of the indicator because of potential false negatives. The tl;dr in my case was that it is a known issue that is fixed in the 9.0.1 release.
2022-07-14 | SPL-225807, SPL-219749 | Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. |
Being unsatisfied with the issue description not being precise enough, I kept probing the support engineer until I got sufficient explanation that it would be applicable.
The way ingestion latency is detected is that a tracker.log file gets generated on the server periodically in $SPLUNK_HOME/var/spool/splunk. It contains a dummy event with a timestamp pulled from the system's current time. That dummy event is used to generate metrics that feed the health indicator reports and are logged to internal indexes. This would be the most reliable way to detect indexing latency. Apparently there was a bug in the code that calculates the latency, which is documented as fixed in the issue above. I watched and inspected the tracker.log files as they were being generated and quickly got bored, but never saw an inaccurate timestamp. So I'll take Splunk's word that the issue should be fixed in the latest release for now.
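For anyone who silences the health alert but still wants independent visibility, a rough SPL sketch that measures indexing delay directly by comparing _indextime with _time (the index choice and the 180-second threshold are only illustrative):
index=_internal sourcetype=splunkd
| eval latency=_indextime-_time
| stats avg(latency) AS avg_latency max(latency) AS max_latency BY host
| where max_latency > 180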
I am still receiving the same issues on a 9.1.1 forwarder and Splunk Enterprise as of 14:06 EST 10/27/2023. Are you as well?
Yes. My problem actually grew worse, with higher latency numbers, since 9.1.1.
My Splunk Enterprise version on the cluster is 9.0.0.1 and I am also facing this problem:
Ingestion Latency
Root Cause(s):
Events from tracker.log are delayed for 48517 seconds, which is more than the red threshold (180 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
.....
Unhealthy Instances:
search-head-02
If anyone can solve this problem. Please help us!
I think you would have to disable autoBatch or upgrade to 9.0.1 (not 9.0.0.1).
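For reference, autoBatch is set per output group in outputs.conf on the forwarder; a rough sketch, with a hypothetical group name:
[tcpout:my_indexers]
# Hypothetical group name; edit your existing tcpout stanza instead
autoBatch = false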
More parallelIngestionPipelines on the indexer seem to help (but not fix); at least fewer messages appear now. Will keep watching.
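For context, that setting is parallelIngestionPipelines in server.conf; a sketch only (the value is just an example, and each extra pipeline costs additional CPU and memory):
[general]
parallelIngestionPipelines = 2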
I see the solution is to turn off useACK, but I can't do that with AWS ingestion, so this is not a good workaround for this issue.
