
Improve indexing throughput when the replication queue is full.

hrawat
Splunk Employee

Here are the configs that on-prem customers can apply to avoid adding more hardware cost.
In 9.4.0 and above, most of the indexing configs are automated, which is why they were dropped from the 9.4.0 suggested list.

Note: This assumes the replication queue is full on most of the indexers and, as a result, the indexing pipeline is also full, while the indexers still have plenty of idle CPU and IO is not an issue.
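Before applying anything, it may help to confirm that situation from the indexers' own metrics.log. A minimal sketch (not from the original post; the exact queue names reported under group=queue vary by version and role, so adjust the filter to your environment):

index=_internal source=*metrics.log* group=queue
| eval fill_pct=round((current_size_kb/max_size_kb)*100,1)
| timechart span=5m avg(fill_pct) by name

Queues sitting near 100% while the host CPU stays mostly idle match the scenario described in the note above.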


On-prem Splunk version 9.4.0 and above
indexes.conf

[default]
maxMemMB=100

server.conf
[general]
autoAdjustQueue=true (can be applied on any Splunk instance: UF/HF/SH/IDX)

Splunk version 9.1 to 9.3.x
indexes.conf
[default]
maxMemMB=100
maxConcurrentOptimizes=2
maxRunningProcessGroups=32
processTrackerServiceInterval=0

server.conf
[general]
parallelIngestionPipelines = 4
[queue=indexQueue]
maxSize=500MB
[queue=parsingQueue]
maxSize=500MB
[queue=httpInputQ]
maxSize=500MB

maxMemMB: minimizes the creation of tsidx files as much as possible, at the cost of higher memory usage by the main splunkd process.
maxConcurrentOptimizes: on the indexing side this is internally 1 no matter what the setting is. On the replication target side, however, launching more splunk-optimize processes means pausing the receiver until each splunk-optimize process is launched, so reducing it lets the receiver spend more time indexing and less time launching splunk-optimize processes. With 9.4.0, both the source (index processor) and the target (replication-in thread) internally auto-adjust it to 1.
maxRunningProcessGroups: allows more splunk-optimize processes to run concurrently. With 9.4.0, it is automatic.
processTrackerServiceInterval: launches splunk-optimize processes as soon as possible. With 9.4.0, you don't have to change it.
parallelIngestionPipelines: provides more receivers on the target side. With 9.4.0, you can enable auto-scaling of pipelines.
maxSize: prevents huge batch ingestion by an HEC client from blocking the queues and returning 503s. With 9.4.0 and autoAdjustQueue set to true, it is no longer a fixed size.
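One way to sanity-check the effect of these changes (a sketch, assuming the standard thruput and queue entries that metrics.log emits) is to compare indexing throughput and blocked-queue events before and after the restart:

index=_internal source=*metrics.log* group=thruput name=thruput
| timechart span=5m avg(instantaneous_kbps) AS indexing_kbps

index=_internal source=*metrics.log* group=queue blocked=true
| timechart span=5m count by name

Throughput should rise and the count of blocked-queue events should drop if the replication queue was the bottleneck.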


PickleRick
SplunkTrust

One question though: won't parallelIngestionPipelines starve the searches of CPU cores?


hrawat
Splunk Employee

Added a note to the original post that the indexers have no IO issues and plenty of idle CPU.
This post is for the scenario where the replication queue is full, causing the pipeline queues to fill as well, but plenty of resources (CPU/IO) are still available.
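If you want to check that condition before applying the settings, CPU headroom on the indexers can be read from the _introspection index. A rough sketch, assuming the usual Hostwide resource-usage fields (data.cpu_user_pct and data.cpu_system_pct):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_busy_pct='data.cpu_user_pct'+'data.cpu_system_pct'
| timechart span=5m avg(cpu_busy_pct) by host

If the busy percentage stays low, the extra pipelines and splunk-optimize processes have headroom to run without starving searches.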
