
How to calculate maximum *nix heavy forwarder capacity/thruput based on memory & cores

abrarfakhri
Path Finder

Fellow Splunkers!
I've spent a lot of time on both the Splunk Answers and Splunkbase sites but can't seem to find a simple formula for this.

I am trying to determine the maximum *nix heavy forwarder capacity with 2 cores and 4GB physical memory. I know this is not up to standard but I am guessing there must be a simple formula we can apply to determine thruput.


somesoni2
Revered Legend

These should give you some helpful stats:

Query1 - overall stats per host

index=_internal sourcetype=splunkd source=*metrics.log group=per_host_thruput | stats max(kbps) as max_kbps max(eps) as max_eps max(kb) as max_kb max(ev) as max_events by host

Query2 - for different granularity

index=_internal sourcetype=splunkd source=*metrics.log group=per_host_thruput
| bucket span=1m _time
| stats max(kbps) as max_kbps max(eps) as max_eps sum(kb) as kb sum(ev) as ev by _time, host
| bucket span=1h _time
| stats max(max_kbps) as max_kbps max(max_eps) as max_eps max(kb) as max_kbpm max(ev) as max_epm sum(kb) as kb sum(ev) as ev by _time, host
| bucket span=1d _time
| stats max(max_kbps) as max_kbps max(max_eps) as max_eps max(max_kbpm) as max_kbpm max(max_epm) as max_epm max(kb) as max_kbph max(ev) as max_eph sum(kb) as kb sum(ev) as ev by _time, host
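
If you only care about this one heavy forwarder, the same idea can be scoped to it. A quick sketch (my_heavy_forwarder is a placeholder for your HF's hostname):

index=_internal sourcetype=splunkd source=*metrics.log group=per_host_thruput host=my_heavy_forwarder | timechart span=1h max(kbps) as max_kbps max(eps) as max_eps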

abrarfakhri
Path Finder

I still hope someone will be able to answer this question...


masonmorales
Influencer

There isn't a formula for what you are asking, just like there isn't a formula for determining how many search heads you should have. There are simply too many variables at play.

Your best bet would be to set up a test instance with your given specs, load the apps you plan on running, and then send a bunch of test data at it until you either see full queues in splunkd.log or exhaust your CPU and/or memory resources.
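
During that kind of load test, one way to spot the saturation point is to watch for blocked queues in the forwarder's metrics.log. A rough sketch (my_heavy_forwarder is a placeholder hostname, and the standard group=queue metrics lines are assumed):

index=_internal sourcetype=splunkd source=*metrics.log group=queue host=my_heavy_forwarder | eval is_blocked=if(blocked=="true",1,0) | timechart span=10m sum(is_blocked) by name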


abrarfakhri
Path Finder

somesoni2, I appreciate your response, but this is not what I was looking for. If I am reading them right, the above queries provide the maximum thruput that the HF is achieving today. What I am looking for is the maximum thruput the HF can achieve given the limitation of 2 cores and 4GB memory.

This would be similar to asking "How many concurrent searches can a SH achieve?" Well, the answer is dependent on the # of cores.
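
For reference, that search ceiling comes straight from limits.conf: max_searches_per_cpu x number_of_cores + base_max_searches, which with the defaults (1 and 6) gives 1 x 2 + 6 = 8 concurrent historical searches on a 2-core box. I'm hoping there is something comparable on the ingest side.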

Hope that clarifies the question a bit more.
Looking forward to your response!


abrarfakhri
Path Finder

To be more specific, the maximum thruput should be measured in max events per second, max kbps, max load, etc.

Here is the current readout from metrics.log:

02-03-2016 15:11:51.891 -0500 INFO  Metrics - group=thruput, name=index_thruput, instantaneous_kbps=762.769461, instantaneous_eps=1963.774257, average_kbps=862.173209, total_k_processed=4834560107.000000, kb=23645.852539, ev=60877.000000, load_average=2.830000
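
To trend those figures over time rather than reading single lines, something along these lines should work (a sketch using the field names from the readout above; my_heavy_forwarder is a placeholder hostname):

index=_internal sourcetype=splunkd source=*metrics.log group=thruput name=index_thruput host=my_heavy_forwarder | timechart span=1h max(instantaneous_kbps) as max_kbps max(instantaneous_eps) as max_eps max(load_average) as max_load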