Splunk Dev

How to adjust concurrent searches on an indexer cluster

xsstest
Communicator

The system is approaching the maximum number of historical searches that can be run concurrently. current=25 maximum=25

I keep receiving this message. My indexer cluster has five peers and one master node; each peer has 16 CPU cores, 64 GB RAM, and a 2048 GB disk. My search head cluster has four search heads, each with 8 CPU cores, 24 GB RAM, and a 200 GB disk.

My questions:

1. Given my hardware configuration, how do I calculate the number of concurrent searches so that search results come back as fast as possible?

2. How do I distribute limits.conf in a cluster? In which directory on the master node should I edit limits.conf?

1 Solution

nnmiller
Contributor

Splunk automatically calculates the concurrency. If you change the CPU configuration of VMs, you should verify that Splunk recalculates properly.

Calculation:

max_searches_per_cpu x number_of_cpus + base_max_searches = max_hist_searches
(max_searches_perc / 100) x max_hist_searches = max_hist_scheduled_searches
max_rt_search_multiplier x max_hist_searches = max_realtime_searches

The default settings are base_max_searches = 6, max_searches_per_cpu = 1, max_searches_perc = 50, and max_rt_search_multiplier = 1. So in your case, with 8 CPUs per search head:

(1 * 8) + 6 = 14 = max_hist_searches per SH
4 * 14 = 56 = max search head cluster concurrency

(50 / 100) * 14 = 7 = max_hist_scheduled_searches per SH
4 * 7 = 28 = max search head cluster scheduled search concurrency

1 * 14 = 14 = max_realtime_searches per SH
4 * 14 = 56 = max rt search head cluster concurrency
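For reference, these settings live in limits.conf. Here is a minimal sketch, assuming the [search] and [scheduler] stanza layout described in limits.conf.spec, with the defaults written out; check the spec for your Splunk version before changing anything:

[search]
# baseline number of concurrent searches allowed regardless of CPU count (default 6)
base_max_searches = 6
# additional concurrent searches allowed per CPU core (default 1)
max_searches_per_cpu = 1
# multiplier applied to the historical limit to get the real-time limit (default 1)
max_rt_search_multiplier = 1

[scheduler]
# percentage of the historical limit that scheduled searches may use (default 50)
max_searches_perc = 50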

How you have quota enforcement set on the SHC will also affect perceived concurrency at the SHC captain (please see "How the cluster enforces quotas" in the Splunk documentation).

Additionally, if there are communication problems in your SHC that prevent the SHs from reporting back that scheduled jobs have been completed, the captain will not remove their search jobs from the queue, resulting in delays or failure to delegate jobs by the captain.

adonio
Ultra Champion

hello there,
your search heads are not in compliance with the Splunk reference hardware, see:
http://docs.splunk.com/Documentation/Splunk/6.6.2/Capacity/Referencehardware
I would recommend checking a couple of things before digging into limits.conf:
do you have real-time searches?
do you have many saved searches that are set to run at the same time (every 5 minutes or so)?
do you have long-running searches (searches that take a long time to complete, tying up cores)?
how many users do you have?
if you find any of the above, is it tied to a particular user?
check your DMC (Monitoring Console) for these items; a sample search is sketched below
many times you will hit this simply because Splunk is not being used properly
hope it helps
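To make the Monitoring Console check concrete, here is a rough sketch of a search that lists the jobs currently running on a search head, using the search jobs REST endpoint (field names such as dispatchState and runDuration can vary by version, so treat it as a starting point):

| rest /services/search/jobs splunk_server=local
| search dispatchState=RUNNING
| table label, author, title, runDuration, dispatchState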


masonmorales
Influencer

@adonio I've converted your comment to an answer. I agree that the search heads do not have sufficient cores to run many concurrent searches on the indexers. Investigating what searches are currently running is definitely a good starting point if you can't do anything immediately about the hardware.


xsstest
Communicator

@adonio Well, if I upgrade the hardware configuration of the search heads, how should I configure concurrent searches?


adonio
Ultra Champion

I would highly recommend leaving limits.conf where it is. Adding CPUs will give you more concurrent search capacity right away (see the quick calculation below), so you will not need to change anything else.
Note: if you are running on virtual machines and you add dedicated cores to your search heads, I think it requires a restart of the server, so make sure to shut down Splunk properly before you do so, especially since you are in a Search Head Cluster architecture.
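Just to illustrate with the formula from the accepted answer (the 16-core figure here is only an example, not a sizing recommendation): upgrading each search head from 8 to 16 cores would, with default settings, raise the historical concurrency from 14 to 22 per search head and from 56 to 88 across the cluster:

(1 * 16) + 6 = 22 = max_hist_searches per SH
4 * 22 = 88 = max search head cluster concurrency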
