
Splunk Forwarders Buffer size and best practices?

SplunkExplorer
Contributor

Hi Splunkers, I have some doubts about forwarder buffers, on both universal and heavy forwarders.

The starting point is this: I know that, if an indexer goes down while it is receiving data from a UF, the UF has a buffering mechanism to store the data and send it to the proper destination once the indexer is up and running again. If I'm not wrong, the limits of this buffer can be set in a config file (I don't remember exactly which one). Now, my questions are:

1. Even if the answer may be obvious, is this mechanism also available on an HF?
2. How can I decide the maximum size of my buffer? Is there a preset limit, or does it depend on my environment?


SanjayReddy
SplunkTrust

Hi @SplunkExplorer 

these attributes in outputs.conf take care of this; see the documentation link and the sketch that follows:

maxQueueSize
autoLBFrequency
autoLBVolume
useACK

https://docs.splunk.com/Documentation/Splunk/8.2.7/Admin/Outputsconf 
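
For reference, a minimal outputs.conf sketch that puts these settings together might look like the one below. The indexer names and numeric values are just placeholders for illustration, not recommendations; check the spec for your version before reusing anything.

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# ask indexers to acknowledge events so the forwarder re-sends anything not confirmed
useACK = true
# in-memory output queue; per the spec, "auto" sizes it based on useACK (500KB without ACK, 7MB with ACK)
maxQueueSize = auto
# switch to another indexer every 30 seconds, or once roughly 1 GB has been sent to the current one
autoLBFrequency = 30
autoLBVolume = 1073741824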


SplunkExplorer
Contributor

Hi @SanjayReddy , thanks a lot. I didn't remember the auto* fields; I only recalled useACK and maxQueueSize. Very useful.

About the max queue size: are there any sources I can use to understand the best/maximum value I can set in my environment? I ask because my final question is: what is the maximum value I can set for maxQueueSize? Does it depend on my hardware availability, or is there a limit I cannot exceed? For example, something like the snippet below.
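
To make that concrete, I mean a setting like this in outputs.conf (the group name, server, and value are just placeholders, not what I actually run):

[tcpout:my_indexers]
server = idx1.example.com:9997
useACK = true
# is there an upper bound for this, or only my available memory?
maxQueueSize = 512MB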

 
