Getting Data In

Splunk 6.2.3 Universal Forwarder maxQueueSize: What is the algorithm used to determine the amount of memory to use?

lisaac
Path Finder

The outputs.conf.spec shows a default value of "auto". The Splunk Universal Forwarder is version 6.2.3 on RHEL 6.6. What algorithm is used to determine the amount of memory to use? OS personnel are asking what the maximum possible memory usage for the agent is.

1 Solution

jeffland
SplunkTrust
SplunkTrust

The docs say

* If set to auto, chooses a value depending on whether useACK is enabled.
* If useACK=false, uses 500KB
* If useACK=true, uses 7MB

There is no algorithm; it uses one of two presets based on expected needs (an environment with acknowledgement enabled is probably going to need a bigger queue).
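For reference, here is what the equivalent explicit settings would look like in outputs.conf on the forwarder, a sketch using the values quoted from the docs above; the group name `my_indexers` and the server address are placeholders:

```ini
# outputs.conf on the Universal Forwarder
# [tcpout:my_indexers] is a placeholder group name
[tcpout:my_indexers]
server = indexer1.example.com:9997

# Without acknowledgement, maxQueueSize = auto resolves to 500KB:
useACK = false
maxQueueSize = 500KB

# With acknowledgement, maxQueueSize = auto resolves to 7MB:
# useACK = true
# maxQueueSize = 7MB
```

Setting maxQueueSize explicitly overrides the auto behavior, which is how you would cap (or raise) the queue's memory footprint if the OS team needs a hard number.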


nehabhuti
New Member

Could you please share the docs that specify the maximum value that can be used for maxQueueSize in outputs.conf for Splunk version 6.6.0?


lisaac
Path Finder

This is good information. Do you know if 6.2.x or 6.3.x supports persistent output queues to disk instead of using more memory?


jeffland
SplunkTrust
SplunkTrust

I don't think so. The only available persistent queues are used at the input stage (to avoid losing data from a busy tcp input, for example), as described here. I don't think writing data to disk prior to indexing is ever possible in Splunk (apart from the input queue just mentioned) because of the principle it employs: your data is only ever "in" one stage at a time, so once an input has been read, the data is on its way to the indexer, passing through all queues and pipelines. If one of the stages is blocked or unavailable, your data simply waits behind that stage (ultimately leading to a waiting file read, or the need for a persistent input queue if it's a streaming input). That makes intermediate persistence unnecessary, and it could cause more problems than it solves.
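For completeness, this is what an input-stage persistent queue looks like in inputs.conf; a sketch in which the port and the size values are placeholders. Persistent queues are only available for streaming inputs such as tcp, udp, and scripted inputs, not for monitored files:

```ini
# inputs.conf - a syslog-style tcp input with a persistent queue
[tcp://:5514]
# in-memory queue size; data spills to disk when this fills up
queueSize = 500KB
# on-disk queue used once the in-memory queue is full;
# setting persistentQueueSize enables the persistent queue
persistentQueueSize = 100MB
```

This only protects the input side: if the output to the indexers is blocked, data still backs up through the pipeline rather than being persisted mid-flight.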
