Splunk Enterprise

Why are we receiving this Ingestion Latency error after updating to 8.2.1?

Marc_Williams
New Member

So we just updated to 8.2.1 and we are now getting an Ingestion Latency error…

How do we correct it? Here is what the link says, and then we have an option to view the last 50 messages (a diagnostic sketch follows the log excerpt below)...

 Ingestion Latency

  • Root Cause(s):
    • Events from tracker.log have not been seen for the last 6529 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
    • Events from tracker.log are delayed for 9658 seconds, which is more than the red threshold (180 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
  • Generate Diag? If filing a support case, click here to generate a diag.

Here are some examples of what is shown in the messages:

  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\spool\splunk.
  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\run\splunk\search_telemetry.
  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\watchdog.
  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\splunk.
  • 07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\introspection.
  • 07-01-2021 09:28:52.275 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\etc\splunk.version.

  • 07-01-2021 09:28:52.269 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\CrushFTP9\CrushFTP.log.

  • 07-01-2021 09:28:52.268 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\watchdog\watchdog.log*.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\splunk_instrumentation_cloud.log*.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\license_usage_summary.log.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\introspection.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\etc\splunk.version.
  • 07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\tracker.log*.
  • 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_new.
  • 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_hec.
  • 07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk.
  • 07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\run\splunk\search_telemetry\*search_telemetry.json.
  • 07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - TailWatcher initializing...
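To narrow this down, a hedged sketch rather than an official fix: check whether tracker.log files are piling up in the spool directory that the batch input reads from, and ask splunkd's health report what the ingestion latency indicator currently sees. The install path, management port, and credentials below are assumptions based on a default Windows install; curl ships with recent Windows builds.

rem List any tracker.log files sitting in the spool directory; normally they are consumed within seconds
dir "C:\Program Files\Splunk\var\spool\splunk\tracker.log*"

rem Query the splunkd health report over REST (8089 is the default management port)
curl -k -u <username>:<password> https://localhost:8089/services/server/health/splunkd/details

If tracker.log files sit in that directory for minutes, something downstream of the batch reader is blocked or falling behind, which matches the wording of the root cause above.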

sombhtr239
Explorer

If anyone has a solution, please help.


sombhtr239
Explorer

I am also facing the same problem. The server's IOPS is 2000, but we are still getting IOWAIT and ingestion latency errors very frequently, and then they go away.
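The red thresholds quoted in these alerts come from the health report configuration, so it can help to confirm which thresholds and indicators are actually in effect on the box. A hedged sketch using btool on a Linux host (health.conf drives the health report in 8.x, but the exact stanza and indicator names on a given version are an assumption, so the full listing is filtered rather than queried by name):

# Dump the effective health report settings and show which file each one comes from
splunk btool health list --debug

# Narrow the output to the latency- and iowait-related indicators
splunk btool health list --debug | grep -iE 'latency|iowait'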


Marc_Williams
New Member

So we upgraded to 8.2.2.1 and are still getting the error. However, it is a bit different from before.

  • Events from tracker.log have not been seen for the last 1395 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.

salbro
Path Finder

Also seeing this issue after moving from 8.1.2 to 8.2.2. We are using older hardware, but this makes me think it is not necessarily related. It comes and goes throughout the day.


apietersen
Communicator

Same here, on Splunk Ent. v8.2.2


Funderburg78
Path Finder

I am also having this issue, but only on one of six Splunk servers. The other Splunk servers do not have a tracker.log. This log is not listed in https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/Enabledebuglogging#log-local.cfg as a Splunk log, so I wonder if it has something to do with the upgrade.

It has been one week since my upgrade, and this is the only server complaining. I would really like to know what this log is and why it is having issues. I checked the file permissions, and they are the same as for the other logs...

This log is in /var/spool/splunk and is monitored by default via a stanza in $SPLUNK_HOME/etc/system/default/inputs.conf, where it is described as a latency tracker. Of my six servers, only the search head running ES even has this log in its directory.
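To confirm which configuration is feeding that batch input, and whether splunkd can actually read and delete files in the spool directory, a hedged sketch (the stanza names come from the TailingProcessor lines earlier in this thread; it assumes $SPLUNK_HOME is set in the shell, and ownership and permissions will vary per install):

# Show the effective inputs.conf lines that mention the spool directory and which file defines them
splunk btool inputs list --debug | grep -i 'spool'

# Check ownership and permissions on the spool directory itself, not just the files inside it
ls -ld $SPLUNK_HOME/var/spool/splunk
ls -l $SPLUNK_HOME/var/spool/splunk/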


JeLangley
New Member

I am going to reach out to support when I get a chance and will update here when I have found a solution or workaround of some sort. My OS is Linux, and the log path and permissions look fine from my perspective as well. We upgraded over a month ago and the issue persists, but only on our indexer. Our heavy forwarders are not affected.


kisstian
Explorer

Have you heard back from support regarding this issue? We have been running on 8.2.2 for several weeks without issue, but today we noticed this on one of the search heads within the SHC.


JeLangley
New Member

I am having this issue as well.  Would appreciate any information you've been able to dig up.


justynap_ldz
Path Finder

Hi Marc,

We are facing the same issue after the 8.2.1 upgrade. Have you already found a solution?

Greetings,
Justyna

 


Marc_Williams
New Member

No... I have not found a solution. However, it appears to have cleared itself.


Marc_Williams
New Member

So we thought we had it resolved. However, it is back again.

When we restart the services, we can watch it go from good to bad.

Has anyone else had luck finding an answer?
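One way to watch that transition is to snapshot the health report right after the restart and again once the alert fires, and to look for warnings from the tailing and batch readers over the same window. A hedged sketch (the management port and credentials are placeholders, and whether TailReader/BatchReader warnings surface the root cause here is an assumption):

rem Snapshot the health report right after restart, then again when the indicator turns red
curl -k -u <username>:<password> https://localhost:8089/services/server/health/splunkd/details

rem Look for tailing and batch reader warnings or errors from the last hour
splunk search "index=_internal sourcetype=splunkd (component=TailReader OR component=BatchReader) (log_level=WARN OR log_level=ERROR) earliest=-1h" -auth <username>:<password>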


yukiang
Observer

Me too; I am still looking for a solution to address this ingestion latency...


PeteAve
Engager

We had this problem after upgrading to v8.2.3 and have found a solution.

After disabling the SplunkUniversalForwarder, SplunkLightForwarder, and SplunkForwarder apps on splunkdev01, the system returned to normal operation. These apps were enabled on the indexer and should have been disabled by default. Also, trying to load a universal forwarder that is not compatible with v8.2.3 will cause ingestion latency and TailReader errors. We had some Solaris 5.1 servers (forwarders) that are no longer compatible with upgrades, so we just kept them on 8.0.5. The upgrade requires Solaris 11 or later.

The first thing I did was go to the web interface, open Manage Apps, and search for *forward*.

This showed the three forwarder apps that I needed to disable, and I disabled them in the interface.

I also ran these commands from the Unix shell on the indexer:

splunk disable app SplunkForwarder -auth <username>:<password>
splunk disable app SplunkLight -auth <username>:<password>
splunk disable app SplunkUniversalForwarder -auth <username>:<password>

After doing these things, the ingestion latency and TailReader errors stopped.
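To confirm the three apps really ended up disabled after running the commands above, a small follow-up sketch (the app names are the ones used in this post; whether a restart is needed for the change to fully take effect is an assumption):

# Show the enabled/disabled state of each app as splunkd sees it
splunk display app SplunkForwarder -auth <username>:<password>
splunk display app SplunkLight -auth <username>:<password>
splunk display app SplunkUniversalForwarder -auth <username>:<password>

# Restart so the disabled state is fully picked up (assumption: may not be strictly required)
splunk restart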

phil__tanner
Path Finder

FWIW, we just upgraded from 8.1.3 to 8.2.5 tonight, and are facing exactly these same issues.

The only difference is that these forwarder apps are already disabled on our instance.

Is there any update from Splunk support on this issue?


dpalmer235
Observer

We upgraded from 8.7.1 to 8.2.6 and we have the same tracker.log latency issue.

Please help us, Splunk...
