I have a requirement from my client to control the bandwidth usage for index replication. He is concerned that index replication could consume enough bandwidth to impact daily business operations. Is there any configuration setting I can use to limit replication bandwidth for that purpose?
Thanks in advance!
Just to make sure we are talking about the right thing:
100 GB/day of raw data results in approx 15 GB on disk (journal.gz). This data is streamed from the primary peer (indexer) to the secondary peer, where it is indexed again (a multisite cluster may handle this differently).
So... 15 GB/day is approx 1/6 megabyte/sec, which is approx 1.5 MEGABIT/sec on the wire...
And yes, peaks apply.
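As a quick sanity check, the arithmetic above can be reproduced in a few lines (assuming the ~15 GB/day compressed figure and a load spread evenly over the day, which real peaks will not be):

```python
# Back-of-the-envelope check of the replication bandwidth estimate above.
# Assumption: 15 GB (decimal) of journal.gz data per day, streamed evenly.

SECONDS_PER_DAY = 24 * 60 * 60      # 86,400 s
daily_bytes = 15 * 10**9            # 15 GB/day of compressed journal data

bytes_per_sec = daily_bytes / SECONDS_PER_DAY
megabits_per_sec = bytes_per_sec * 8 / 10**6

print(f"{bytes_per_sec / 10**6:.2f} MB/s")   # ~0.17 MB/s (roughly 1/6 MB/s)
print(f"{megabits_per_sec:.2f} Mbit/s")      # ~1.39 Mbit/s
```

On a datacenter link this is negligible; the number only matters if replication crosses a constrained WAN link.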
Seriously, are we talking about bandwidth control in a datacenter where you have 1-40 GBit/sec?
If you want to tweak the mentioned settings, please involve Splunk PS before you do...
Hi, you can try adjusting these parameters in the server.conf file, but be careful while making changes:
[clustering]

max_peer_build_load = &lt;integer&gt;
* This is the maximum number of concurrent tasks to make buckets searchable that can be assigned to a peer.
* Defaults to 2.

max_peer_rep_load = &lt;integer&gt;
* This is the maximum number of concurrent non-streaming replications that a peer can take part in as a target.
* Defaults to 5.

max_peer_sum_rep_load = &lt;integer&gt;
* This is the maximum number of concurrent summary replications that a peer can take part in, as either a target or a source.
* Defaults to 5.
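As a sketch, the stanza on the cluster master's server.conf might look like the following. The values shown are illustrative assumptions, not recommendations: lowering them reduces how many replication and fixup tasks run concurrently, which caps peak bandwidth at the cost of slower bucket fixup after a peer failure.

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the cluster master
# Illustrative values only -- test in a non-production environment first.
[clustering]
# Fewer concurrent "make searchable" tasks per peer (default: 2)
max_peer_build_load = 1
# Fewer concurrent non-streaming replications per target peer (default: 5)
max_peer_rep_load = 2
# Fewer concurrent summary replications per peer (default: 5)
max_peer_sum_rep_load = 2
```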
Note that these settings need to be set on the Cluster Master, followed by a restart of the Cluster Master. They do not require a bundle push or a restart of the indexing peers for the new settings to take effect.
Hi, thanks for your suggestion, I will try it out. Thanks!