We are using Splunk 7.1.1 with three search heads in a cluster environment. Each search head has 40 CPU cores. A lot of our saved searches are getting skipped because the maximum number of concurrent searches (69) is reached. I tried following some of the answers posted previously and made a change in savedsearches.conf: realtime_schedule = 0.
After making the above change, I see a lot of searches going into status=continued.
Questions:
1) Will the continued searches run? When do they run, and how can I check whether they have run?
2) Is there any other way, or any .conf changes I need to make, to get these searches to run?
3) What changes can I make in limits.conf to get the searches running?
Thanks,
Vineeth
You should check your indexer I/O, look at the reason for the skips in the _internal index, and check whether your scheduled searches are stacked on top of each other. You should probably also set schedule_window = auto rather than the default of 0.
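To see why searches are being skipped, you can query the scheduler logs in the _internal index. A minimal sketch (the field names here are the standard scheduler log fields; adjust the time range as needed):

```
index=_internal sourcetype=scheduler status=skipped
| stats count by reason, savedsearch_name
| sort - count
```

The reason field will tell you whether the skips come from hitting the concurrent-search limit or something else, and which saved searches are affected most.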
Hello @skopelpin,
We are using Splunk 7.1.1. I went to the particular alert, clicked on the advanced edit, and changed schedule_window from 0 to auto. Is this the right process, or do we need to make the change by logging into each search head?
A better method would be to change the default value on the deployer and push it out to all the search heads, so that new alerts will get the auto value.
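As a sketch of that approach (the app name my_alerts, the deployer paths, and the target URI below are placeholders; adjust to your environment): set the default in the app's savedsearches.conf under the deployer's shcluster directory, then push the bundle.

```
# $SPLUNK_HOME/etc/shcluster/apps/my_alerts/default/savedsearches.conf
[default]
schedule_window = auto
```

```
# Run on the deployer; -target points at any cluster member (placeholder URI)
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```

The [default] stanza applies the setting to every saved search in that app unless a specific stanza overrides it.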
Hi @vrmandadi,
Have you looked at the answer https://answers.splunk.com/answers/270544/how-to-calculate-splunk-search-concurrency-limit-f.html to calculate how many searches of each type you can run concurrently on Splunk?
Additionally, as you mentioned, each search head has 40 CPU cores, which means each search head can run 46 historical searches concurrently. Now you need to check whether the captain is delegating scheduled searches to all search heads; you can find more information in this question: https://answers.splunk.com/answers/329699/why-does-my-search-head-cluster-captain-start-dele-1.html
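For reference, the 46 comes from the limits.conf concurrency formula (with the defaults base_max_searches = 6 and max_searches_per_cpu = 1):

```
max_hist_searches = max_searches_per_cpu x number_of_cpus + base_max_searches
                  = 1 x 40 + 6
                  = 46
```

Note that the scheduler is only allowed max_searches_perc (default 50%) of that, i.e. 23 concurrent scheduled searches per search head, which is where your cluster-wide limit of 69 (3 x 23) comes from.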
If all search heads are running at full capacity, meaning all CPU cores are occupied by concurrently running scheduled searches, then I'd suggest adding more nodes to the search head cluster, or fine-tuning the searches that take the longest to run.
Hello harsmarvania,
Thank you for your reply,
Well, my question relates to status=continued: will these searches run again, or will they stay in the same state? I did check whether the captain is delegating searches to all search heads, and that looks good. Is there a way to configure particular searches to take higher priority than others when multiple searches run at the same time?
Can I change max_searches_per_cpu from 1 to 2? Does this help with 40 cores?
If you change max_searches_per_cpu=2, each search head will be able to run 86 scheduled searches, but I don't recommend this because it will impact search performance (the search head will divide the same resources among double the number of scheduled searches compared to earlier, so each search will take more time to run) and you won't get much benefit from it.
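For reference, if you did want to experiment, the setting lives in limits.conf under the [search] stanza. And for the earlier priority question: savedsearches.conf has a schedule_priority setting (available since Splunk 6.5, values default/higher/highest) that tells the scheduler to prefer a given search when there is contention. A sketch ("My Critical Alert" is a hypothetical stanza name):

```
# limits.conf -- not recommended, shown for reference only
[search]
max_searches_per_cpu = 2    # 2 x 40 + 6 = 86 concurrent historical searches
```

```
# savedsearches.conf -- raise scheduler priority for one important search
[My Critical Alert]
schedule_priority = higher
```

Raising schedule_priority on a few critical searches is usually a safer lever than raising the global concurrency limit.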
1) Will the continued searches run? When do they run, and how can I check whether they have run?
Check the status, based on the sid, of the searches with status=continued.
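For example, something along these lines against the scheduler logs (replace the savedsearch_name placeholder with your search's name):

```
index=_internal sourcetype=scheduler savedsearch_name="<your search>"
| table _time, savedsearch_name, sid, status, scheduled_time, run_time
| sort - _time
```

If a continued run eventually executed, you should see a later event with status=success for the corresponding scheduled_time; if it never does, the scheduler kept deferring it.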
Hello @Somesoni2,
I did check them; they are running late, some as much as 3 hours afterward. So what is a good workaround for these issues? What can be improved?