Splunk Enterprise

Forwarder Ingestion Latency

Sinfo
New Member

The IP address keeps changing with the same error.

Forwarder Ingestion Latency
Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 272246. Message from D97C3DE9-B0CE-408F-9620-5274BAC12C72:192.168.1.191:50409
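
The one-line message summarizes a larger health report. A sketch of how to view the underlying detail on the instance showing the alert, assuming the splunkd health details REST endpoint is available on your version:

| rest /services/server/health/splunkd/details splunk_server=local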

How do you solve the problem?


mattymo
Splunk Employee

This is part of the splunkd health report.

It is configured in health.conf
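
The thresholds for this indicator live under the ingestion latency feature stanza. A rough sketch of what to look for (the indicator name comes from the message above; the values are placeholders, so check $SPLUNK_HOME/etc/system/default/health.conf for the defaults shipped with your version):

[feature:ingestion_latency]
# Placeholder thresholds; the shipped defaults may differ by version
indicator:ingestion_latency_gap_multiplier:yellow = <yellow_threshold>
indicator:ingestion_latency_gap_multiplier:red = <red_threshold>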

I would suggest checking whether this "forwarder" is sending old files, is genuinely falling behind, or needs some cleanup of its ingestion tracker values.

- MattyMo

fatsug
Builder

So, how does one isolate the affected forwarder?

The error message reads

Forwarder Ingestion Latency

 

  • Root Cause(s):
    • Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 89. Message from <UUID>:<ip-addrs>:54246
  • Unhealthy Instances:
    • indexer1
    • indexer2

     

    The "message from" section just lists the UUID, an IP adress and a port. Which part here would help me find the actual forwarder? The UUID does not match any "Client name" under forwarder management on the deployment server. The IP adress does not match a server on which I have a forwarder installed.

    One or a few of the indexers are listed as "unhealthy instances" each time. But the actual error sounds like it lives in the forwarder end and not on the indexer.

    With the available information in this warning/error. How can I figure out which forwarder is either experiencing latency issues OR need to have that log file mentioned flushed.
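
One way to attempt the mapping, as a sketch (assuming the indexers keep their connection metrics in _internal, and substituting the GUID from the message for the placeholder), is to search the tcpin_connections metrics on the indexers for that GUID and read back the forwarder's hostname:

index=_internal source=*metrics.log* group=tcpin_connections guid="<UUID-from-the-message>"
| stats latest(_time) AS last_seen BY guid, sourceIp, hostname, fwdType, version
| convert ctime(last_seen)

If the GUID returns nothing, searching the same metrics by sourceIp (the IP from the message) is a fallback. If that IP does not belong to any host with a forwarder installed, the connection may be arriving through an intermediate forwarder or NAT, in which case the same search would need to be repeated on that intermediate tier.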

 
