Splunk Enterprise

Improve indexing throughput if the replication queue is full.

hrawat
Splunk Employee

Here are the configs for on-prem customers who want to improve throughput without adding more hardware cost.
On 9.4.0 and above, most of the indexing configs are automated, which is why they are dropped from the 9.4.0 suggested list.

Note: This assumes the replication queue is full on most of the indexers and, as a result, the indexing pipeline is also full, while the indexers still have plenty of idle CPU and IO is not an issue.
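
Before changing anything, a quick way to confirm this scenario is to check queue fill from metrics.log. This is only a sketch of the usual queue-fill search against the _internal index; adjust the host filter for your indexers and watch the replication-related queue names alongside indexqueue and parsingqueue:

index=_internal source=*metrics.log* group=queue
| eval pct_full=round(current_size_kb/max_size_kb*100,1)
| timechart span=10m avg(pct_full) by name

If the replication and index queues sit near 100% while CPU stays mostly idle, the settings below are worth trying.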


On-prem Splunk version 9.4.0 and above
indexes.conf

[default]
maxMemMB=100

server.conf
[general]
autoAdjustQueue=true (can be applied on any Splunk instance: UF/HF/SH/IDX)
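
With autoAdjustQueue enabled, the queues are no longer fixed size. One rough way to watch whether queue capacity actually changes over time is to chart the reported maximum size from metrics.log (this assumes the adjusted capacity shows up in the max_size_kb field, which is how the queue metrics report capacity):

index=_internal source=*metrics.log* group=queue name=indexqueue
| timechart span=10m max(max_size_kb) by host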

Splunk version 9.1 to 9.3.x
indexes.conf
[default]
maxMemMB=100
maxConcurrentOptimizes=2
maxRunningProcessGroups=32
processTrackerServiceInterval=0

server.conf
[general]
parallelIngestionPipelines = 4
[queue=indexQueue]
maxSize=500MB
[queue=parsingQueue]
maxSize=500MB
[queue=httpInputQ]
maxSize = 500MB

maxMemMB: tries to minimize creation of tsidx files as much as possible, at the cost of higher memory usage by the mothership (main splunkd).
maxConcurrentOptimizes: on the indexing side this is internally 1 no matter what the setting is. On the replication target side, launching more splunk-optimize processes means pausing the receiver until each splunk-optimize process is launched, so reducing it keeps the receiver doing more indexing work instead of launching splunk-optimize processes. With 9.4.0, both the source (index processor) and the target (replication-in thread) internally auto-adjust it to 1.
maxRunningProcessGroups: allows more splunk-optimize processes to run concurrently. With 9.4.0, it is automatic.
processTrackerServiceInterval: runs splunk-optimize processes as soon as possible. With 9.4.0, you don't have to change it.
parallelIngestionPipelines: adds more receivers on the target side. With 9.4.0, you can enable auto-scaling of pipelines.
maxSize: prevents huge batch ingestion by a HEC client from blocking the queues and returning 503s. With autoAdjustQueue=true on 9.4.0, the queue size is no longer fixed.
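
To check whether the tuning helps, one simple before/after comparison is to count blocked-queue events. A sketch, assuming the standard blocked=true flag on queue metrics:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count

Fewer blocked indexqueue/replication entries after the change, with steady or higher numbers in group=per_index_thruput, indicates the bottleneck is clearing.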


PickleRick
SplunkTrust

One question though - won't parallelIngestionPipelines starve the searches of CPU cores?


hrawat
Splunk Employee

I added a note to the original post that the indexers have no IO issues and plenty of idle CPU.
This post is for the scenario where the replication queue is full, causing the pipeline queues to fill up as well, but plenty of resources (CPU/IO) are still available.
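
If you want to confirm the "plenty of idle CPU" assumption before bumping parallelIngestionPipelines, a rough check against the introspection data (assuming the Hostwide resource-usage fields are populated on your indexers):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart span=5m avg(data.cpu_idle_pct) by host

If idle CPU stays comfortably high during peak ingestion, extra pipelines shouldn't starve searches; if not, leave parallelIngestionPipelines alone.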
