My company has two massive machines as search heads: 256GB RAM and 24 cores each.
The indexers are equipped just fine as well.
We also run searches that query large amounts of data. Two things stand out:
- The RAM is never fully used, at most 50-70GB
- The search times grow much faster than linearly: a 1-hour time range is fine, but an 8-hour range takes roughly 20x as long
What could be the limiting factors that hinder performance?
Which settings in limits.conf could be tweaked for better performance?
The settings in limits.conf are briefly described in http://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf
but that doesn't give me a good enough understanding of what to change.
For example, what are good values for maxmemusage_mb and maxresultrows?
Any help and hints are appreciated.
The indexers do the majority of the work, and Splunk scales horizontally. What does your indexing infrastructure look like? Massive search heads give you massive concurrent search capability, but not faster search performance. For that, you need to add more/faster indexers.
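To see whether the search workload is actually being distributed, one quick check is an event-count breakdown by indexer. This is a generic SPL sketch (run it over a representative time range; `index=*` is just an illustrative scope, narrow it to your real indexes as needed):

```
| tstats count where index=* by splunk_server
```

If one indexer holds the bulk of the events, that peer becomes the bottleneck no matter how large the search heads are.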
Just to know, did you modify any of these settings?
(These descriptions appear to match, respectively, search_process_memory_usage_threshold, stack_size, and status_cache_size in the [search] stanza of the limits.conf spec.)

search_process_memory_usage_threshold:
* To be active, this setting requires setting: enable_memory_tracker = true
* Signifies the maximum memory in MB the search process can consume in RAM.
* Search processes violating the threshold will be terminated.
* If the value is set to zero, then splunk search processes are allowed
  to grow unbounded in terms of in-memory usage.
* The default value is set to 4000MB or 4GB.

stack_size:
* The stack size (in bytes) of the thread executing the search.
* Defaults to 4194304 (4 MB)

status_cache_size:
* The number of search job status data splunkd can cache in RAM. This cache
  improves performance of the jobs endpoint.
* Defaults to 10000
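Putting those excerpts together, a limits.conf [search] stanza that turns on memory tracking and raises the per-process threshold might look like the sketch below. The 8000 value is only an illustrative assumption, not a recommendation; measure before and after changing it:

```
[search]
# Enable per-search memory tracking (required for the threshold below to apply)
enable_memory_tracker = true
# Terminate a search process that exceeds this many MB of RAM.
# Default is 4000; 8000 here is only an example value.
search_process_memory_usage_threshold = 8000
```

Note that raising memory limits lets individual searches use more RAM, but it does not by itself make searches faster; if the indexers are the bottleneck, these settings won't change the search duration.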
What is your usage per core?
Have you perhaps already experimented with different values and compared the processing times for the same search?