We have an internally built application management tool to start Splunk and monitor its process status.
A search head clustering rolling restart would basically break that tool.
Since we'd like to manage restarts ourselves, can we turn this feature off?
I saw an attribute called "percent_peers_to_restart" in server.conf. Can we achieve that by setting it to 0?
I would be hesitant to try to replace the deployer functionality. The search heads are meant to be kept in close synchronization, so that field extractions, lookups, etc. stay the same and the same results are returned regardless of which search head runs the search.
It's unclear from your question exactly what circumstances you are trying to avoid a rolling restart in. In search head clustering, a rolling restart occurs in only two situations:
1. After the deployer pushes updates to cluster members. (Even then, not all updates require a restart. See http://docs.splunk.com/Documentation/Splunk/6.2.4/DistSearch/PropagateSHCconfigurationchanges#Push_t... )
2. When you initiate a rolling restart with the splunk rolling-restart command.
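For the second case, the restart is kicked off deliberately from the CLI on a cluster member, so it only happens when you run it. A sketch of the invocation (assumes you run it as an admin user from a member's Splunk install):

```shell
# Run from any search head cluster member; triggers a rolling restart
# of all members, a percentage of them at a time.
splunk rolling-restart shcluster-members
```

If your tool never issues this command, this trigger simply never fires.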
You say that you are using an "internally built application management tool". If that tool replaces the deployer functionality, then you can simply restart members whenever you want, using whatever method you prefer.
If that tool does not replace the deployer, then presumably your question is how to avoid a potential rolling restart after pushing updates via the deployer. Unfortunately, there is no way to prevent a potential rolling restart after a deployer push. Setting percent_peers_to_restart to 0 won't do it either, because, as the spec file for server.conf states, "regardless of setting, a minimum of 1 peer will be restarted per round."
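For reference, the setting you found lives in the [shclustering] stanza of server.conf on each member. A sketch of what setting it to 0 would look like (value is illustrative, and per the spec-file quote above it still restarts at least one peer per round):

```ini
# server.conf on each search head cluster member (illustrative)
[shclustering]
# Percentage of members restarted per round of a rolling restart.
# Even at 0, a minimum of 1 peer is restarted per round.
percent_peers_to_restart = 0
```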
Our intention is not to replace the deployer or the configuration distribution mechanism. We just need stop/start to be managed by ourselves. I think it would be fair for Splunk to make the rolling restart optional.