Deployment Architecture

After upgrading to 6.5.0, why is search head cluster skipping about 50% of scheduled searches?

rbal_splunk
Splunk Employee

On a 10-node SHC deployment, after an upgrade from 6.2.5 to 6.5.0, the cluster is skipping about 50% of its scheduled searches.

1 Solution

rbal_splunk
Splunk Employee

This issue has been resolved. The following steps were taken to debug and fix it:

1) Observation 1: The following search showed that on the SHC members the delegatejob calls were taking up to 200 seconds (an expanded version of this search is sketched below, after the observations):

index=_internal source=*splunkd_access.log delegatejob | stats avg(spent) by host

2) Observation 2: Running ps -ef | grep splunk on the SHC members showed that a large number of Splunk search launcher processes were hanging.
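
As a rough sketch of a broader version of the Observation 1 search, the same splunkd_access.log data can be broken out by average, worst-case, and 95th-percentile latency per member (this assumes the standard spent field, which is reported in milliseconds):

index=_internal source=*splunkd_access.log delegatejob | stats avg(spent) AS avg_ms max(spent) AS max_ms perc95(spent) AS p95_ms count BY host | sort - max_ms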

To resolve the issue, the following change was implemented on all SHC members:

$SPLUNK_HOME/etc/system/local/limits.conf

[search] 
search_process_mode = traditional 

After the above change was made, the frequency of skipped searches dropped significantly.
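
To confirm the improvement, the scheduler's skip ratio can be tracked over time. The search below is only a sketch; it assumes the usual scheduler.log status values (for example skipped and success):

index=_internal sourcetype=scheduler | bin _time span=1h | stats count AS total count(eval(status=="skipped")) AS skipped BY _time | eval skip_pct = round(100 * skipped / total, 1)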


onthebay
Path Finder

Why does this change resolve the issue? What has changed with the auto setting in 6.5?
