So we just updated to 8.2.1 and we are now getting an Ingestion Latency error…
How do we correct it? Here is what the link says and then we have an option to view the last 50 messages...
Here are some examples of what is shown as the messages:
07-01-2021 09:28:52.269 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\CrushFTP9\CrushFTP.log.
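For reference, the same health tree behind that banner can also be pulled from the REST endpoint /services/server/health/splunkd/details. A rough sketch of walking such a tree for non-green indicators; the payload below is a simplified, made-up example of the general shape, not real output from that endpoint:

```python
import json

# Simplified, assumed shape of the splunkd health report
# (the real payload is more deeply nested; illustration only).
sample = json.loads("""
{
  "health": "red",
  "features": {
    "File Monitor Input": {
      "health": "red",
      "features": {
        "Ingestion Latency": {
          "health": "red",
          "indicators": {
            "ingestion_latency_gap_multiplier": {"health": "red"}
          }
        }
      }
    }
  }
}
""")

def failing_indicators(node, path=()):
    """Recursively collect non-green indicators from a health tree."""
    found = []
    for name, ind in node.get("indicators", {}).items():
        if ind.get("health") != "green":
            found.append(("/".join(path), name, ind.get("health")))
    for name, child in node.get("features", {}).items():
        found.extend(failing_indicators(child, path + (name,)))
    return found

for feature, indicator, status in failing_indicators(sample):
    print(f"{feature}: {indicator} is {status}")
```

That at least tells you which feature tripped (here the Ingestion Latency feature under File Monitor Input) instead of scrolling the last 50 messages.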
we are getting the same error on our Cluster Master and it's running version 9.0.0
We also opened a support case with Splunk; will keep you all up to date on how it unfolds.

Upgraded to version 9.0 and facing a similar issue: Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. Did you find any solution for this?
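If the latency is expected in your environment (e.g. a known slow input), the thresholds behind this indicator can be tuned in health.conf. Something like the following; the values here are purely illustrative, so check the health.conf spec for your version before changing anything:

```ini
# $SPLUNK_HOME/etc/system/local/health.conf
# Illustrative values only -- see health.conf.spec for your version.
[feature:ingestion_latency]
indicator:ingestion_latency_gap_multiplier:yellow = 5
indicator:ingestion_latency_gap_multiplier:red = 10
```

This only silences the alert at the configured level; it does not fix whatever is actually delaying ingestion.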
No, but after a day or two the problem just went away.
So we upgraded to 184.108.40.206 and are still getting the error. However, it is a bit different than before.
I am also having this issue, but on only one of 6 Splunk servers. The other Splunk servers do not have a tracker.log. This log is not listed in https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/Enabledebuglogging#log-local.cfg as a Splunk log, so I wonder if it has something to do with the upgrade.
It has been 1 week since my upgrade and this is the only server complaining. Would really like to know what this log is and why it is having issues. I checked file permissions and they are the same as for the other logs...
This log is in /var/spool/splunk and is monitored by default via /splunk/etc/system/default/inputs.conf, where it is listed as a latency tracker. Of my 6 servers, only the search head running ES even has this log in the directory.
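For anyone else looking: going from memory, the default stanza covering that directory looks roughly like the excerpt below. Check your own etc/system/default/inputs.conf rather than copying this, since the exact stanza varies by version:

```ini
# Approximate excerpt from $SPLUNK_HOME/etc/system/default/inputs.conf.
# tracker.log files dropped into the spool directory feed the
# ingestion latency health check; sinkhole means they are consumed
# and deleted after indexing.
[batch://$SPLUNK_HOME/var/spool/splunk/tracker.log*]
move_policy = sinkhole
```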
I am going to reach out to support when I get a chance and will update here when I have found a solution/workaround of some sort. My OS is Linux and the log path/permission looks fine from my perspective as well. We upgraded over a month ago and this issue persists but only on our indexer. Our heavy forwarders are not affected by this.
We had this problem after upgrading to v8.2.3 and have found a solution.
After disabling the SplunkUniversalForwarder, SplunkLightForwarder, and SplunkForwarder apps on splunkdev01, the system returned to normal operation. These apps were enabled on the indexer but should have been disabled by default. Also, running a Universal Forwarder that is not compatible with v8.2.3 will cause ingestion latency and TailReader errors. We had some Solaris 5.1 servers (forwarders) that are no longer compatible with upgrades, so we just kept them on 8.0.5. The upgrade requires Solaris 11 or later.
The first thing I did was go to the web interface, Manage Apps, and search for *forward*.
This showed the three forwarder apps I needed to disable, and I disabled them in the interface.
I also ran these commands on the indexer (Unix):
splunk disable app SplunkForwarder -auth <username>:<password>
splunk disable app SplunkLightForwarder -auth <username>:<password>
splunk disable app SplunkUniversalForwarder -auth <username>:<password>
After doing these things, the ingestion latency and TailReader errors stopped.