distsearch.conf replicationThreads

sfmandmdev

What factors should be considered when deciding on the appropriate number of replicationThreads? Are there any performance implications of increasing this value?

1 Solution

Ledion_Bitincka
Splunk Employee

The number of replication threads defaults to 5. When changing this value, take into account the number of search peers, the network bandwidth between the search head and the peers, and the size of the knowledge bundles. Consider increasing it if you have many peers, the link between the search head (SH) and the peers is slow, or the bundles are large. Note that if SSL is enabled between the SH and the peers, you will incur a high CPU load during the transfer, because the bundle is encrypted, compressed, and sent to the peers by that many threads in parallel.
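
For reference, this setting lives in distsearch.conf on the search head. Below is a minimal sketch of the change, assuming the standard [replicationSettings] stanza; the value of 10 is purely illustrative, not a recommendation:

    # $SPLUNK_HOME/etc/system/local/distsearch.conf (on the search head)
    [replicationSettings]
    # Maximum number of threads used to replicate the knowledge bundle to peers.
    # Defaults to 5; raise it only if you have many peers, slow links, or large bundles.
    replicationThreads = 10

A search head restart is generally required for distsearch.conf changes to take effect.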
