
Is there a way to calculate bandwidth requirements for Splunk index replication in a indexer cluster?

keithyap
Path Finder

Basically, the situation is this:

The customer asked what their bandwidth requirement would be for replication between the indexers.

Say the license size is 200 GB per day; with roughly 50% compression, the indexed data stored should be about 100 GB.
They have 2 indexers in the cluster, with a replication factor of 2 and a search factor of 2.
So my calculation is below (not sure if it is correct).

Based on the Splunk docs, the 50% consists of the following:
15% for the rawdata file
35% for the associated index files

Total rawdata = (100 × 0.15) × 2 (replication factor) = 30 GB
Total index files = (100 × 0.35) × 2 (search factor) = 70 GB

So a total of 100 GB of data will be replicated per day.
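For reference, here is a minimal sketch of that volume estimate in Python. The figures are just my assumptions from above (the 15%/35% split and the factors of 2), not an official Splunk formula.

# Rough sketch of the replicated-volume estimate above; the percentages and
# factors are assumptions from this post, not an official Splunk formula.
daily_indexed_gb = 100        # ~50% of a 200 GB/day license after compression
rawdata_share = 0.15          # rawdata (journal) portion of stored data
index_share = 0.35            # associated index files (tsidx etc.) portion
replication_factor = 2
search_factor = 2

total_rawdata_gb = daily_indexed_gb * rawdata_share * replication_factor  # 30 GB
total_index_gb = daily_indexed_gb * index_share * search_factor           # 70 GB
total_replicated_gb = total_rawdata_gb + total_index_gb                   # 100 GB
print(total_rawdata_gb, total_index_gb, total_replicated_gb)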

For the bandwidth calculation of 100 GB per day:
(100 / 86400) × 1024 × 1024 = 1213.63 KB/s
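As a quick check, the same conversion in a couple of lines, assuming the replication traffic is spread evenly over 24 hours:

# Convert a daily replicated volume (GB/day) to a sustained average rate (KB/s).
replicated_gb_per_day = 100
kb_per_day = replicated_gb_per_day * 1024 * 1024   # GB -> KB
average_rate_kb_s = kb_per_day / 86400              # 86400 seconds in a day
print(round(average_rate_kb_s, 2))                  # ~1213.63 KB/s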

This is what I have come up with so far. Any advice would be appreciated.
Also, what happens if it is a multisite cluster?

493669
Super Champion

@keithyap,
have a look at this site:
https://splunk-sizing.appspot.com/
Provide your inputs (daily indexing size, number of indexers, etc.) and it will calculate the required sizing.


keithyap
Path Finder

@493669 Thanks for the quick reply. Regarding the sizing, I have used this website before.
Sadly, what I need to find out now is the bandwidth requirement for replicating the data, not the storage sizing itself. =(
