Deployment Architecture

Is there a way of limiting the search load on an indexer cluster using configuration on the indexer cluster itself?

pellegrini
Path Finder

Is there a way of limiting the search load on an indexer cluster using configuration on the indexer cluster itself? E.g. setting a maximum limit on how many concurrent searches are allowed to run simultaneously.

Lowering the parameters in the Concurrency limits section of limits.conf, and in savedsearches.conf, does not have any effect on the indexers. These only seem to have an effect on the search head: https://docs.splunk.com/Documentation/Splunk/9.0.2/Admin/Limitsconf
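For illustration, these are the kinds of settings I mean (the values and the saved-search stanza name below are only examples, not our actual configuration):

# limits.conf, Concurrency limits section
[search]
base_max_searches = 6
max_searches_per_cpu = 1

# savedsearches.conf, per saved search
[example_scheduled_search]
max_concurrent = 1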

One environment I use has multiple standalone search heads that all run searches against the same indexer cluster. The measured median number of concurrent searches on an indexer peer goes well above the maximum concurrent searches limit (roughly twice the limit). That indexer cluster has a default limits.conf.
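For reference, one way to measure this kind of per-peer concurrency is something like the following introspection search (a sketch; the host value and the 10-second span are only examples):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=* host=idx-peer-01
| bin _time span=10s
| stats dc(data.search_props.sid) AS concurrent_searches BY _time
| stats median(concurrent_searches) AS median_concurrency, max(concurrent_searches) AS peak_concurrency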

1 Solution

gcusello
SplunkTrust

Hi @pellegrini,

in the Splunk Certified Architect course there is some information about the number of indexers required for a given number of users and data ingestion volume.

The reference hardware page (https://docs.splunk.com/Documentation/Splunk/9.0.2/Capacity/Referencehardware) describes three configurations (minimum, mid-tier, and high-performance), but it doesn't give numerical indications about this.

My hint is to use the Monitoring Console to analyze the load (with special attention to CPU load) during the day, to understand whether your cluster can handle the search heads' requests or whether you have queues building up.

If it can't keep up, you can upgrade your systems' hardware towards the reference specifications.
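For example, a quick check of queue saturation on a peer could look something like this (a sketch; the host value and time span are only examples):

index=_internal source=*metrics.log* group=queue host=idx-peer-01
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) BY name

Queues that sit near 100% for long periods are a sign the indexing tier can't keep up with the load.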

Ciao.

Giuseppe


pellegrini
Path Finder

Thanks @gcusello 

Is it true, as I state in the question, that it's not possible to limit the maximum number of concurrent searches on the search peers themselves?

Based on what you're saying, I guess we should never hit these limits if we follow the reference hardware sizing. It's just that it is cheaper and quicker to change things with software and configuration. This environment works fine under normal conditions, but there are exceptions where this type of limit would keep the load at a level where indexing and replication still work well. As a workaround, we reduced the search load manually during high-load situations.


gcusello
SplunkTrust

Hi @pellegrini,

you can define the maximum number of concurrent searches per user in the role definitions, but this limit isn't usually applicable to scheduled searches.
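For example, these are the kinds of role settings involved, in authorize.conf on the search heads (the role name and values are only illustrative):

# authorize.conf
[role_user]
srchJobsQuota = 3
rtSrchJobsQuota = 1
cumulativeSrchJobsQuota = 10
cumulativeRTSrchJobsQuota = 4

Note that these quotas are enforced where the searches are dispatched (the search heads), not on the indexer peers.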

For scheduled searches, the best approach is to analyze the load (using the Monitoring Console), optimize the search scheduling, and, if necessary, add more resources to the systems.
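For example, the standard savedsearches.conf knobs for spreading scheduled searches out over time are schedule_window and allow_skew (the stanza name and values below are only illustrative):

# savedsearches.conf
[example_scheduled_search]
schedule_window = 10
allow_skew = 5m

This doesn't cap concurrency on the peers, but it reduces how many scheduled searches fire at exactly the same moment.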

Ciao.

Giuseppe
