Deployment Architecture

Search Head in a Search Head Cluster is in manual detention, but still taking searches


I need to upgrade a Search Head Cluster from 7.3.4 to 8.1.9 and I have run the first two commands:

splunk upgrade-init shcluster-members

splunk edit shcluster-config -manual_detention on

We are monitoring the active searches using the following command:

splunk list shcluster-member-info | grep "active"

The active_historical_search_count seemed like it would never reduce to 0, but after 90 minutes it finally came down to 0. However, when we checked the currently-running searches, we found new searches starting on the detained member an hour after it entered detention. We have the following set in server.conf:
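For anyone monitoring the same way, the count can be pulled out of the CLI output and polled in a loop. A minimal sketch; the sample output text below is illustrative, not captured from a real member:

```shell
# Illustrative sample of `splunk list shcluster-member-info` output
# (real output formatting may differ slightly between versions):
sample_output='active_historical_search_count:3
active_realtime_search_count:0'

# Extract the historical search count from the output text.
count=$(printf '%s\n' "$sample_output" | awk -F: '/active_historical_search_count/ {print $2}')
echo "historical searches still running: $count"

# Against a live member you would re-run the CLI instead of parsing sample text:
# while :; do
#   splunk list shcluster-member-info | grep active
#   sleep 30
# done
```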

decommission_force_finish_idle_time = 0

decommission_node_force_timeout = 300

decommission_search_jobs_wait_secs = 180

Why is it taking 90 minutes for the saved searches to stop running?

We did find some saved searches with very long runtimes and fixed them, but shouldn't all new searches be dispatched to another member once this one is in manual detention? What can I do to fix this, so that my SHC can be upgraded?




Based on the documentation, those decommission_* parameters are valid only on search peers (indexers) or the cluster master, and only when you run the "splunk offline" command.

When you put an SHC member into manual detention, it just waits for in-flight searches to finish. From the docs:

  • On a search head that is in manual detention but not a part of a searchable rolling restart. These searches will run to completion.
  • On a search head that is a part of a rolling upgrade. During rolling upgrade of a search head cluster, you can put a single search head into manual detention and wait for the existing search jobs to run to completion before you shut down the search head.
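For reference, the per-member sequence during a rolling upgrade looks roughly like this. A sketch based on the documented procedure, not a script to paste as-is; paths and exact steps may vary by version:

```
# Run once, on any member, to start the upgrade window:
#   splunk upgrade-init shcluster-members
# Then, for each member in turn:
#   splunk edit shcluster-config -manual_detention on   # stop accepting new searches
#   splunk list shcluster-member-info | grep active     # wait for counts to reach 0
#   splunk stop
#   (install the new Splunk version over $SPLUNK_HOME)
#   splunk start
#   splunk edit shcluster-config -manual_detention off
# Finally, once all members are upgraded:
#   splunk upgrade-finalize shcluster-members
```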

I'm not sure there is any way to gracefully force those searches to finish on an SHC member. I usually just wait for a while and then cancel the remaining jobs if needed.
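If you do end up canceling leftover jobs, the splunkd REST endpoint /services/search/jobs/&lt;sid&gt;/control with action=cancel does that per job. A sketch only; the SID, host, and credentials below are placeholders:

```shell
# Placeholder SID and management URL -- substitute your own values.
SID="scheduler__admin__search__RMD5example_at_1660000000_1"
MGMT="https://localhost:8089"

# Build the job-control endpoint for this SID.
cancel_url="$MGMT/services/search/jobs/$SID/control"
echo "POST $cancel_url action=cancel"

# Real call (requires valid credentials):
# curl -k -u admin:changeme "$cancel_url" -d action=cancel
```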

I agree with you that in detention mode it shouldn't accept any new queries from the scheduler or from users. But maybe there is an error in the documentation, and it really means that the node doesn't accept any new user sessions?

r. Ismo
