Splunk Enterprise

Server unreachable causing heavy forwarders to exhaust resources, preventing log ingestion - is this preventable?

corvoattano44
New Member

I am sending logs from the heavy forwarders to a non-Splunk server over syslog UDP, and this normally works fine. Recently the remote non-Splunk server went down and the heavy forwarders could not reach it. As a result, multiple queues backed up and consumed all available resources, to the point that all existing log ingestion on the heavy forwarders stopped. Some of the heavy forwarders also reported that the Splunk service was not running.

Is there a way to prevent this from happening again? What I want to ensure is that if the remote server goes down in the future, the queues do not build up and resources are not exhausted, so that log ingestion keeps working.
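For context, the relevant outputs.conf on the heavy forwarders looks roughly like the sketch below (the group name, address, and port are placeholders). The queue-related settings at the end are only an idea I am considering, not something I have verified: maxQueueSize and dropEventsOnQueueFull are documented for tcpout groups, so whether they also apply to a syslog output stanza on my version would need to be confirmed against outputs.conf.spec.

    # outputs.conf on the heavy forwarder (names and values are examples)
    [syslog:remote_syslog]
    server = 203.0.113.10:514    # placeholder address of the non-Splunk destination
    type = udp

    # Assumption to verify: bound the output queue so a dead destination
    # cannot back-pressure the whole pipeline. These two settings are
    # documented for tcpout stanzas; check outputs.conf.spec before
    # relying on them for syslog output.
    maxQueueSize = 10MB
    dropEventsOnQueueFull = 30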
