Splunk Enterprise

Why are we receiving this ingestion latency error after updating to 8.2.1?

Marc_Williams
Explorer

So we just updated to 8.2.1 and we are now getting an Ingestion Latency error…

How do we correct it? Here is what the link says, and then we have an option to view the last 50 messages...

 Ingestion Latency

  • Root Cause(s):
    • Events from tracker.log have not been seen for the last 6529 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
    • Events from tracker.log are delayed for 9658 seconds, which is more than the red threshold (180 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
  • Generate Diag? If filing a support case, click here to generate a diag.
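
For context, the same root-cause details that the banner shows can also be pulled from the health report REST endpoint. A minimal SPL sketch, assuming your role is allowed to run the rest command against the local splunkd:

    | rest splunk_server=local /services/server/health/splunkd/details

That returns the per-feature health indicators (including ingestion latency) as search results, which can be easier to scan than the last-50-messages view.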

Here are some examples of what is shown as the messages:

  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\spool\splunk.
  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\run\splunk\search_telemetry.
  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\watchdog.
  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\splunk.
  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\introspection.
  • 07-01-2021 09:28:52.275 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\etc\splunk.version.

  • 07-01-2021 09:28:52.269 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\CrushFTP9\CrushFTP.log.

  • 07-01-2021 09:28:52.268 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\watchdog\watchdog.log*.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\splunk_instrumentation_cloud.log*.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\license_usage_summary.log.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\introspection.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\etc\splunk.version.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\tracker.log*.
  • 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_new.
  • 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_hec.
  • 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk.
  • 07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\run\splunk\search_telemetry\*search_telemetry.json.
  • 07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - TailWatcher initializing...

linhmai_bne
Path Finder

I got a similar issue after upgrading to 8.2.7. I have tried:

useAck=false

disabling the app Splunk...Forwarders

chown -R splunk:splunk /opt/splunk

but the problem is still there.

0 Karma

tyates_ctm
Explorer

TL;DR: check `server` in `[tcpout:]` in `outputs.conf` of the server (not the UFs).

I got this error after migrating onto bigger servers. The cause was that the `server` attribute in the `[tcpout:]` stanza in `outputs.conf` on the various members of the cluster hadn't been updated. I have no idea why, but at some point over the past 5 years that same attribute on the UFs had been pointed at different DNS records, so the indexers were still receiving the important data from across the estate.
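
For anyone checking the same thing, here is a minimal outputs.conf sketch of the stanza in question; the group name and hostnames are placeholders rather than values from my environment:

    # outputs.conf on the forwarding tier (placeholder names)
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    # The stale attribute in my case -- make sure it points at the
    # current indexer hosts / DNS records after any migration.
    server = idx1.example.com:9997, idx2.example.com:9997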

Hope this helps someone.

0 Karma

Gregski11
Contributor

We have a case open on this as well; I will keep you posted on the resolution.

We see messages like the one below, then they just mysteriously go away, and a few days later they return. We are on version 9.0.0.

  • Root Cause(s):
    • Events from tracker.log have not been seen for the last 1394 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
0 Karma

Gregski11
Contributor

We are getting the same error on our Cluster Master, and it's running version 9.0.0.

  • Root Cause(s):
    • Events from tracker.log are delayed for 44 seconds, which is more than the yellow threshold (15 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.

 

We also opened a support case with Splunk; we will keep you all up to date on how it unfolds.

0 Karma

jdcabanglan
Loves-to-Learn Lots

Did you fix the issue?

0 Karma

Zacknoid
Explorer

Upgraded to version 9.0 and facing a similar issue: Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. Did you find a solution for this?

Thanks

Gregski11
Contributor

@Zacknoid wrote:

Upgraded to version 9.0 and facing a similar issue: Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. Did you find a solution for this?

Thanks


No, but after a day or two the problem just went away.

0 Karma

Zacknoid
Explorer

Still looking for a resolution to the ingestion latency error.

0 Karma

sombhtr239
Explorer

If anyone has a solution, please help.

0 Karma

sombhtr239
Explorer

I am also facing the same problem. Server IOPS is 2000, but we still get IOWAIT and ingestion latency errors very frequently, and then they go away.
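
In case it is useful, a quick way to confirm whether iowait is actually spiking when the message appears (standard sysstat tooling, nothing Splunk-specific):

    # Sample extended CPU and disk stats every 5 seconds, 6 times;
    # watch %iowait in the avg-cpu block and %util per device.
    iostat -x 5 6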

0 Karma

Kathir
Loves-to-Learn Everything

I am also getting this:

Indicator 'ingestion_latency_gap_multiplier' exceeded configured value
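
For reference, the thresholds behind this indicator live in health.conf. Below is a hypothetical local override sketch; the exact stanza name, indicator names, and sensible values should be verified against $SPLUNK_HOME/etc/system/default/health.conf for your version before changing anything:

    # $SPLUNK_HOME/etc/system/local/health.conf  (hypothetical sketch)
    [feature:ingestion_latency]
    # Placeholder values -- tune to your environment, or leave the defaults
    # and investigate the underlying indexing/forwarding lag instead.
    indicator:ingestion_latency_gap_multiplier:yellow = 2
    indicator:ingestion_latency_gap_multiplier:red = 4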

0 Karma

Marc_Williams
Explorer

So we upgraded to 8.2.2.1 and are still getting the error. However, it is a bit different than before.

  • Events from tracker.log have not been seen for the last 1395 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
0 Karma

salbro
Path Finder

Also seeing this issue after moving from 8.1.2 to 8.2.2. We are on older hardware, but seeing others hit it makes me think it is not necessarily hardware-related. It comes and goes throughout the day.

0 Karma

apietersen
Contributor

Same here, on Splunk Ent. v8.2.2

0 Karma

Funderburg78
Path Finder

I am also having this issue, but only on one of 6 Splunk servers. The other Splunk servers do not have a tracker.log. This log is not listed in https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/Enabledebuglogging#log-local.cfg as a Splunk log, so I wonder if it has something to do with the upgrade.

It has been 1 week since my upgrade and this is the only server complaining. I would really like to know what this log is and why it is having issues. I checked file permissions and they are the same as the other logs...

This log is in /var/spool/splunk and is monitored by default via /splunk/etc/system/default/inputs.conf, where it is listed as a latency tracker. Of my 6 servers, only the search head running ES even has this log in the directory.
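
If it helps anyone compare servers, one way to see exactly which input stanza picks tracker.log up (standard btool usage; the grep is only there to narrow the output):

    # Show the effective inputs configuration and the file each setting comes from,
    # then filter down to the tracker.log batch input.
    $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i tracker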

0 Karma

JeLangley
Engager

I am going to reach out to support when I get a chance and will update here when I have found a solution/workaround of some sort.  My OS is Linux and the log path/permission looks fine from my perspective as well.  We upgraded over a month ago and this issue persists but only on our indexer.  Our heavy forwarders are not affected by this.

0 Karma

kisstian
Explorer

Have you heard back from support regarding this issue?  We have been running on 8.2.2 for several weeks without issue, but today noticed this on one of the search heads within the SHC. 

0 Karma

JeLangley
Engager

My apologies, we actually redeployed for a separate issue we were facing so I never did contact them on this.  

0 Karma

JeLangley
Engager

I am having this issue as well.  Would appreciate any information you've been able to dig up.

0 Karma

justynap_ldz
Path Finder

Hi Marc,

We are facing the same issue after the 8.2.1 upgrade.
Have you already found a solution?

Greetings,
Justyna

 

0 Karma