Knowledge Management

What are some best practices for determining the limits for the number of data inputs on a heavy forwarder?


I am looking for best practices for determining when I have reached the limits of the number of data inputs that should be set up on a heavy forwarder.

I have an existing heavy forwarder where I am running DB Connect to query ~20 different databases at various frequencies. On this same heavy forwarder I have ~250 data inputs that query REST APIs for various storage appliance data.

I am experiencing Splunk daemon stability issues when my Linux server is rebooted or the Splunk daemon is restarted. The CPU load maxes out, causing the Splunk daemon to shut down. My heavy forwarder is a virtual server with 14 vCPUs and 16 GB of memory. It is running RHEL 7 with the ulimits set as Splunk specifies.
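One quick sanity check (a sketch, not an official Splunk procedure): splunkd inherits the limits of whatever started it, and limits configured in /etc/security/limits.conf do not always apply to services started at boot, so it is worth confirming the effective values rather than the configured ones:

```shell
# Show the effective limits in the current shell; splunkd inherits the
# limits of the process that launched it. Splunk's docs recommend raising
# "open files" (ulimit -n) and "max user processes" (ulimit -u) well
# above distro defaults on busy heavy forwarders.
ulimit -n
ulimit -u
```

If splunkd is managed by systemd, the service unit's LimitNOFILE/LimitNPROC settings are what matter, not the values in limits.conf.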

Are there any documented configurations for heavy forwarders? Is there anything that might help besides trying to increase resources on this server such as configuration file settings?



Hi @mttilley65,

as suggested by @isoutamo, you could use the Monitoring Console App to find the queues and see how full they are.

In addition, you could use this search to get the same result:

index=_internal  source=*metrics.log sourcetype=splunkd group=queue 
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
   name=="indexqueue", "4 - Indexing Queue",
   name=="parsingqueue", "1 - Parsing Queue",
   name=="typingqueue", "3 - Typing Queue",
   name=="splunktcpin", "0 - TCP In Queue",
   name=="tcpin_cooked_pqueue", "0 - TCP In Queue") 
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size) 
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size) 
| eval fill_perc=round((curr/max)*100,2) 
| bin _time span=1m
| stats median(fill_perc) AS fill_percentage by host, _time, name 
| where fill_percentage>70

This search shows you which queues are filling up and by how much they exceed their configured size, so you can set the correct values for the queue parameters.
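If the search shows a queue that is persistently full, its size can be raised on the heavy forwarder in server.conf. A minimal sketch, with illustrative sizes only (tune them based on what the search above reports, and remember that a full downstream queue usually means the bottleneck is further along the pipeline):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the heavy forwarder.
# Stanza names correspond to the queue names in metrics.log;
# the maxSize values here are examples, not recommendations.
[queue=parsingQueue]
maxSize = 6MB

[queue=aggQueue]
maxSize = 6MB

[queue=typingQueue]
maxSize = 6MB

[queue=indexQueue]
maxSize = 6MB
```

A restart of splunkd is required for queue size changes to take effect.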





In this answer I have added a couple of links on how to solve this kind of issue.

r. Ismo




you could add it to the Monitoring Console (MC) as an indexer. After that you can monitor it like any other indexer and see when its limits are reached. I also suggest adding your own custom groups in MC for all your HFs.
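Once the HF is visible in the MC, you can also watch its resource pressure directly from the introspection data it forwards. A sketch (the host value is a placeholder for your heavy forwarder's hostname):

```
index=_introspection host=<your_hf> component=Hostwide
| timechart span=5m avg(data.cpu_system_pct) AS cpu_system 
    avg(data.cpu_user_pct) AS cpu_user 
    avg(data.normalized_load_avg_1min) AS load_1min
```

Sustained load near the vCPU count around restart time would confirm that the ~270 inputs starting at once are saturating the box.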

r. Ismo
