I have a Splunk instance on version 9.0.3, and it keeps throwing an error in the Forwarder Ingestion Latency health check with Root Cause "'ingestion_latency_gap_multiplier' indicator exceeds configured value. Observed value is 2587595". Does anyone know how to solve this problem?
Hello,
I've solved this issue. The main problem was with the UF agent; I just needed to delete and reinstall the UF, and the error went away.
I, too, was seeing a similar message, with the GUID and IP of the UF that was supposedly having an issue. Alongside that, I was getting an email from an alert I'd set up for "UFs no longer sending logs", and my monitoring console also showed it as missing.
However, if I did a query for it on a search head, I was definitely still seeing current events coming in, and my deployment server said it was still checking in.
This is in a mixed environment of the architectural Splunk components (MC, CM, DSLM, SHs, HFs, IDXs) running on Linux, with the majority of UFs running on Windows. Due to my department's role, I do not have OS access to those Windows servers.
As an experiment, I created a simple app containing just a text file on the DS, set it to restart Splunkd, added it to a new server class, and assigned only the problem UF client to that class. As expected, once the client got the file, the UF restarted and the symptoms went away.
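For anyone who wants to try the same approach, a minimal serverclass.conf sketch on the deployment server could look roughly like this; the server class name, app name, and hostname below are placeholders for your own environment:

    # serverclass.conf on the deployment server (names are placeholders)
    [serverClass:restart_problem_uf]
    # match only the forwarder showing the symptom
    whitelist.0 = problem-uf-hostname

    [serverClass:restart_problem_uf:app:restart_trigger_app]
    # restart splunkd on the client after it downloads the app
    restartSplunkd = true

The app itself only needs to exist under $SPLUNK_HOME/etc/deployment-apps/restart_trigger_app (for example, containing just that text file); the restart is what actually clears the symptom.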
@PickleRick Would removing the tracker.log have solved the issue as well? I had the admin who has OS access to it restart the UF, but that did not solve the issue. Maybe just restarting the UF wasn't enough, and it came back up using the same tracker.log?
Well, I can't tell you if it would have solved your problem because I have no idea if it was the same problem. It had the same symptoms but maybe the underlying cause was different. It could have solved it if it was the same problem 🙂
1. Verify that you actually have a latency problem. Check the data coming from the given forwarder and confirm whether it does indeed show a delay in indexing (a sample search is sketched after this list).
2. It sometimes turns out that the forwarder does not properly handle $SPLUNK_HOME/var/spool/splunk/tracker.log* (the file on which the alert is based): old values are not removed from the file but are re-ingested as new values are appended to it. Try stopping the forwarder, removing the tracker.log file, and restarting the forwarder (see the commands sketched below).
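For step 1, one quick way to check is to compare each event's index time against its event time for that forwarder; the index and host values here are placeholders:

    index=* host=<problem_uf_host> earliest=-1h
    | eval lag_seconds = _indextime - _time
    | stats count avg(lag_seconds) max(lag_seconds) by host

If max(lag_seconds) stays small, the data itself is not actually delayed, and the health alert is more likely a tracker.log artifact.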
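For step 2, on a *nix forwarder the sequence is roughly the following (run as the user that owns the Splunk installation; adjust $SPLUNK_HOME and use the equivalent paths on Windows):

    $SPLUNK_HOME/bin/splunk stop
    rm $SPLUNK_HOME/var/spool/splunk/tracker.log*
    $SPLUNK_HOME/bin/splunk start

Removing the file should be safe, since it only carries the tracking data used by this health check and the forwarder writes it again after the restart.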