distsearch.conf replicationThreads

sfmandmdev
Path Finder

What factors should be taken into consideration in deciding the appropriate number of replicationThreads? Are there any performance considerations for increasing this value?

1 Solution

Ledion_Bitincka
Splunk Employee

The default number of replication threads is 5. When changing this value, take into account the number of search peers, the network bandwidth between the search head and the peers, and the size of the knowledge bundles. Consider increasing it if you have many peers, the link between the search head and the peers is slow, and the bundles are large. Note that if SSL is enabled between the search head and the peers, you will incur a higher CPU load during the transfer, because the bundle is encrypted, compressed, and sent to the peers by X threads in parallel.
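
As a minimal sketch of where this setting lives, it can be raised in distsearch.conf on the search head under the [replicationSettings] stanza (the value 10 below is purely illustrative, not a recommendation):

    # distsearch.conf on the search head (illustrative values)
    [replicationSettings]
    # Number of threads used to push the knowledge bundle to search peers.
    # Default is 5; raise cautiously and watch CPU, especially with SSL enabled.
    replicationThreads = 10

A search head restart is typically needed for distsearch.conf changes to take effect.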
