Hi, dear all,
Can somebody help me?
I need to know the capacity of my Splunk deployment to execute concurrent searches.
This is the current CPU capacity of my servers:
Server | Physical CPU | Virtual CPU | Cores x Socket | Sockets | Threads x Core | #CPUs
SH1 | 2 | - | 10 | 2 | 2 | 40
SH2 | 1 | - | 8 | 1 | 2 | 16
SH3 (virtual) | - | 2 | 6 | 2 | 1 | 12
IDX1 | 2 | - | 4 | 2 | 2 | 16
IDX2 | 2 | - | 8 | 2 | 1 | 16
IDX3 (virtual) | - | 2 | 8 | 2 | 1 | 16
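
As a cross-check, the #CPUs column follows from cores per socket x sockets x threads per core; a minimal sketch of that arithmetic in Python:

```python
# Cross-check of the #CPUs column, assuming
# #CPUs = cores per socket x sockets x threads per core.

servers = {
    # name: (cores_per_socket, sockets, threads_per_core)
    "SH1":  (10, 2, 2),
    "SH2":  (8, 1, 2),
    "SH3":  (6, 2, 1),
    "IDX1": (4, 2, 2),
    "IDX2": (8, 2, 1),
    "IDX3": (8, 2, 1),
}

for name, (cores, sockets, threads) in servers.items():
    print(f"{name}: {cores * sockets * threads} logical CPUs")
# SH1: 40, SH2: 16, SH3: 12, IDX1: 16, IDX2: 16, IDX3: 16
```
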
The default will be base_max_searches (defaults to 6) + max_searches_per_cpu (defaults to 1) x the number of CPU cores, i.e. a 32-core host can run 38 concurrent searches out of the box.
The scheduler can use up to max_searches_perc (defaults to 60), i.e. up to 23 of the 38 total searches on a 32-core host, the rest being reserved for ad hoc queries.
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf
Btw, if you are running the Monitoring Console in distributed mode, it will show you the number of CPU cores Splunk recognizes under the Instances tab (the virtual number is what Splunk will use).
Example: CPU Cores (Physical / Virtual)
4 / 8
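
For illustration, a minimal sketch of that default calculation in Python, assuming the stock limits.conf values; the exact rounding Splunk applies to the scheduler share may differ slightly:

```python
# Default concurrency limits as described above, assuming stock values:
# base_max_searches = 6, max_searches_per_cpu = 1, max_searches_perc = 60.

def search_limits(cpu_cores, base=6, per_cpu=1, sched_perc=60):
    """Return (total, scheduler) concurrent-search limits for one host."""
    total = base + per_cpu * cpu_cores
    scheduler = round(total * sched_perc / 100)  # rounding is an assumption
    return total, scheduler

print(search_limits(32))  # -> (38, 23) for the 32-core example above
```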

Correction: SH1 = ((6+40)*60)/100 = 27 max concurrent searches

Hi Rob, thank you for your answer.
Then, for this case, considering 60% for scheduled searches:
SH1 = ((6+48)*60)/100 = 32 max concurrent searches
SH2 = ((6+16)*60)/100 = 13 max concurrent searches
SH3 = ((6+12)*60)/100 = 11 max concurrent searches
So the total for the cluster is 32 + 13 + 11 = 56 concurrent searches that my system can approximately execute?

Yes, that looks valid for the maximum number of searches that can be run by the Splunk scheduler. Note that you can adjust max_searches_per_cpu (although that is usually not a good idea). max_searches_perc could be adjusted up or down by 10% or so, depending on whether the system will be running mostly scheduled searches/alerts vs. user ad hoc searches.
The total searches able to execute (scheduled and ad hoc) would be:
SH1 = 6 + 48 = 54 max concurrent searches
SH2 = 6 + 16 = 22 max concurrent searches
SH3 = 6 + 12 = 18 max concurrent searches
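
For reference, a minimal sketch reproducing these per-search-head figures and the cluster totals in Python, using the core counts as given in this thread and rounding the 60% scheduler share to the nearest integer (Splunk's own rounding may differ slightly):

```python
# Per-search-head limits and cluster totals, using the core counts
# quoted in this thread (SH1 = 48, SH2 = 16, SH3 = 12) and defaults
# base_max_searches = 6, max_searches_per_cpu = 1, max_searches_perc = 60.

BASE = 6          # base_max_searches
PER_CPU = 1       # max_searches_per_cpu
SCHED_PERC = 60   # max_searches_perc

search_heads = {"SH1": 48, "SH2": 16, "SH3": 12}

sum_total = sum_sched = 0
for name, cores in search_heads.items():
    total = BASE + PER_CPU * cores           # scheduled + ad hoc
    sched = round(total * SCHED_PERC / 100)  # scheduler's share
    sum_total += total
    sum_sched += sched
    print(f"{name}: total={total}, scheduler={sched}")

print(f"Cluster: total={sum_total}, scheduler={sum_sched}")
# SH1: total=54, scheduler=32
# SH2: total=22, scheduler=13
# SH3: total=18, scheduler=11
# Cluster: total=94, scheduler=56
```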

One last question.
In the implementation I see this configuration on each SH:
limits.conf
max_historical_searches_per_cpu = 4
base_max_searches = 6
Then, for this cluster, would it be:
max_historical_searches_per_cpu x number_of_cpus + base_max_searches
4*(40+16+12)+6 = 278
Or should I calculate each one separately and then sum them?
4*40+6 = 166
4*16+6 = 70
4*12+6 = 54
= 290
The results are different.
Well, this configuration is wrong, but it isn't causing problems because the average number of concurrent searches is only 1.40.
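
For illustration, a minimal sketch of the two calculations side by side; the difference is only how many times base_max_searches gets added:

```python
# The two calculations compared above, with max_historical_searches_per_cpu = 4
# and base_max_searches = 6 as configured. The gap comes from how often the
# base is added: once for the whole cluster vs. once per search head.

PER_CPU = 4   # max_historical_searches_per_cpu
BASE = 6      # base_max_searches
cores = {"SH1": 40, "SH2": 16, "SH3": 12}

cluster_wide = PER_CPU * sum(cores.values()) + BASE       # 4*68 + 6 = 278
per_member = {sh: PER_CPU * c + BASE for sh, c in cores.items()}
# {'SH1': 166, 'SH2': 70, 'SH3': 54}

print(cluster_wide)              # 278
print(sum(per_member.values()))  # 290 (BASE counted once per member)
```

Since each instance reads its own limits.conf, the per-member figures are what each search head applies locally; how the captain aggregates them into a cluster-wide quota is a separate question.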

Also, I see an old message in splunkd.log that says:
05-09-2019 13:46:11.904 -0400 ERROR SHCMaster - Search not executed: The maximum number of historical concurrent system-wide searches has been reached. current=108 maximum=105 for search: admin;xxxxxxxxx
But I don't know how this value of 105 was obtained?

Thanks for your help. Now it is much clearer to me.

When I pressed "submit" I just saw your Monitoring Console paragraph. 😄 Deleted my comment.
Skalli

Hi, try again.
