So we just updated to 8.2.1 and we are now getting an Ingestion Latency error…
How do we correct it? Here is what the link says and then we have an option to view the last 50 messages...
Ingestion Latency
Here are some examples of the messages that are shown:
07-01-2021 09:28:52.269 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\CrushFTP9\CrushFTP.log.
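In case it helps with triage: the same health messages also get written to splunkd.log on the affected host, so you are not limited to the last 50 messages shown in the UI. A rough sketch for a Unix host (on Windows the log lives under %SPLUNK_HOME%\var\log\splunk instead; component names can vary a bit by version):
# Pull recent tailing/ingestion-related messages from splunkd.log
grep -iE "TailReader|TailingProcessor|ingestion" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -n 50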
We are getting the same error on our Cluster Master, and it's running version 9.0.0.
We also opened a support case with Splunk and will keep you all up to date on how it unfolds.
Upgraded to version 9.0 and facing a similar issue: Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. Did you find out any solution for this?
Thanks
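For anyone wondering where that indicator lives: it comes from the health report, which is driven by health.conf. A minimal sketch of how we inspected the current thresholds from the CLI (the stanza name feature:ingestion_latency is what we see on our 8.2/9.0 instances; treat it as an assumption and check your own defaults before changing anything):
# Show the effective ingestion_latency health-report settings and which file they come from
$SPLUNK_HOME/bin/splunk btool health list feature:ingestion_latency --debug
On newer releases the same thresholds can also be reviewed under Settings > Health report manager in Splunk Web.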
@Zacknoid wrote: Upgraded to version 9.0 and facing a similar issue: Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. Did you find out any solution for this?
Thanks
No, but after a day or two the problem just went away.
Anyone having a solution, please help.
I am also facing the same problem. Server IOPS is 2000, yet we still get IOWAIT and ingestion latency errors very frequently, and then they go away.
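For what it's worth, this is the kind of OS-level check we run while the alert is active, to see whether the disks are actually the bottleneck. A rough sketch for Linux (iostat comes from the sysstat package; the sample interval is just a judgment call, not Splunk guidance):
# Per-device utilization, queue and wait times, sampled every 5 seconds
iostat -x 5 3
# Overall CPU iowait over the same window
vmstat 5 3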
So we upgraded to 8.2.2.1 and are still getting the error. However, it is a bit different from before.
Also seeing this issue after moving from 8.1.2 to 8.2.2. We are using older hardware, but this makes me think it is not necessarily related. It comes and goes throughout the day.
Same here, on Splunk Ent. v8.2.2
I am also having this issue, but only on one of 6 Splunk servers. The other Splunk servers do not have a tracker.log. This log is not listed in https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/Enabledebuglogging#log-local.cfg as a Splunk log, so I wonder if it has something to do with the upgrade.
It has been 1 week since my upgrade and this is the only server complaining. I would really like to know what this log is and why it is having issues. I checked file permissions and they are the same as the other logs...
This log is in /var/spool/splunk and is monitored by default via a stanza in /splunk/etc/system/default/inputs.conf, where it is listed as a latency tracker. Of my 6 servers, only the search head running ES even has this log in the directory.
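For reference, this is how I confirmed which default input picks that file up, rather than reading the default inputs.conf by hand. A sketch assuming a Unix install with SPLUNK_HOME set; the stanza that shows up should be the one pointing at var/spool/splunk/tracker.log*:
# Find the default input stanza that monitors the latency tracker file
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i tracker
# Confirm the file exists and check its ownership/permissions
ls -l $SPLUNK_HOME/var/spool/splunk/tracker.log*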
I am going to reach out to support when I get a chance and will update here when I have found a solution/workaround of some sort. My OS is Linux and the log path/permission looks fine from my perspective as well. We upgraded over a month ago and this issue persists but only on our indexer. Our heavy forwarders are not affected by this.
Have you heard back from support regarding this issue? We have been running on 8.2.2 for several weeks without issue, but today noticed this on one of the search heads within the SHC.
My apologies, we actually redeployed for a separate issue we were facing so I never did contact them on this.
I am having this issue as well. Would appreciate any information you've been able to dig up.
Hi Marc,
We are facing the same issue after the 8.2.1 upgrade.
Have you already found a solution?
Greetings,
Justyna
No... I have not found a solution. However, it appears to have cleared itself.
So we thought we had it resolved. However it is back again.
We restart the services and we can watch it go from good to bad.
Anyone else had luck finding an answer?
Me too, looking for a solution to address this ingestion latency...
We had this problem after upgrading to v8.2.3 and have found a solution.
After disabling the SplunkUniversalForwarder, SplunkLightForwarder, and SplunkForwarder apps on splunkdev01, the system returned to normal operation. These apps were enabled on the indexer and should have been disabled by default. Also, loading a Universal Forwarder that is not compatible with v8.2.3 will cause ingestion latency and TailReader errors. We had some Solaris 5.1 servers (forwarders) that are no longer compatible with upgrades, so we just kept them on 8.0.5; the upgrade requires Solaris 11 or later.
The first thing I did was go to the web interface, Manage Apps, and search for *forward*.
This showed the three forwarder apps that I needed to disable, and I disabled them in the interface.
I also ran these commands on the indexer (Unix):
splunk disable app SplunkForwarder -auth <username>:<password>
splunk disable app SplunkLightForwarder -auth <username>:<password>
splunk disable app SplunkUniversalForwarder -auth <username>:<password>
After doing these things the ingestion latency and tailreader errors stopped.
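One extra check we did afterwards was confirming the apps actually report as disabled. A quick sketch with the stock CLI, same auth placeholders as above (the output should list each app as disabled; the exact format varies a little by version):
splunk display app SplunkForwarder -auth <username>:<password>
splunk display app SplunkLightForwarder -auth <username>:<password>
splunk display app SplunkUniversalForwarder -auth <username>:<password>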
FWIW, we just upgraded from 8.1.3 to 8.2.5 tonight, and are facing exactly these same issues.
Only difference is that these forwarder apps are already disabled on our instance.
Is there any update from Splunk support on this issue?
We upgraded from 8.7.1 to 8.2.6 and we have the same tracker.log latency issue.
Please help us SPLUNK...