With multiple clustered search heads in production, how do I best manage multiple deployers?
What you could do is set up a dedicated deployment server to serve your search head cluster deployers, or use an existing one. Then, in deploymentclient.conf on each deployer, change the repositoryLocation:
[deployment-client]
repositoryLocation = $SPLUNK_HOME/etc/shcluster/apps
[target-broker:deploymentServer]
targetUri = deploymentserver.splunk.mycompany.com:8089
This method will keep $SPLUNK_HOME/etc/shcluster/apps updated on all of your deployers, but you will still have to run splunk apply shcluster-bundle on each deployer to push the bundle out to the cluster members. If possible, I would use a git repository for each app: on the deployment server, clone all of the repos from the central git repos.
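For example, a minimal sketch of that flow (the repo URL, hostnames, and credentials below are placeholders, not anything from the original answer):
# On the deployment server: clone or refresh the app repos from the central git server
cd /opt/splunk-app-repos
git clone https://git.mycompany.com/splunk/my_shc_app.git    # first time only
git -C my_shc_app pull                                        # subsequent updates
# On each deployer, once the deployment server has delivered the apps
# into $SPLUNK_HOME/etc/shcluster/apps, push them to the cluster members:
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.mycompany.com:8089 -auth admin:changeme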
Check out my .conf presentation, "Nordstrom ITOps".
I don't know if there are any official best practices here, but I can share what we try to do. We have two clusters and a few standalone boxes. Some of our apps/TAs/etc. need to be on all of those search heads.
Right now, our approach is a manual effort. We have a central location on our network where we store "gold" versions of apps. And then we copy them down to shcluster/apps or etc/apps on whatever cluster/boxes need them. It's just a process. Need to update an app? Grab the gold copy and put it down on a test box. Make/test your changes. Copy back up to the central repository. Copy down to all of the relevant servers. Schedule any needed cluster pushes.
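Roughly, that process looks like this (a sketch only; the paths, hostnames, and rsync-over-ssh transport are assumptions about our setup, not anything Splunk-specific):
# Pull the gold copy down to a test box, then make and test your changes there
rsync -av /mnt/central/splunk-gold/my_app/ testbox:/opt/splunk/etc/apps/my_app/
# After testing, copy the updated app back up to the central "gold" repository
rsync -av testbox:/opt/splunk/etc/apps/my_app/ /mnt/central/splunk-gold/my_app/
# Copy it down to each deployer (or standalone box) that needs it, then schedule the push
rsync -av /mnt/central/splunk-gold/my_app/ deployer1:/opt/splunk/etc/shcluster/apps/my_app/
ssh deployer1 '/opt/splunk/bin/splunk apply shcluster-bundle -target https://sh1.mycompany.com:8089 -auth admin:changeme'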
You may be able to manage them with a deployment server, similar to how you can for a cluster master? Maybe? (http://docs.splunk.com/Documentation/Splunk/latest/Indexer/Updatepeerconfigurations#Use_deployment_s...). But for now, that's a bit overkill for our needs.
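If you did go that route, the deployment server side would just be a serverclass.conf stanza that treats each deployer as a deployment client. A rough sketch, with hypothetical host and app names:
# serverclass.conf on the deployment server
[serverClass:shc_deployers]
whitelist.0 = deployer1.mycompany.com
whitelist.1 = deployer2.mycompany.com

[serverClass:shc_deployers:app:my_shc_app]
stateOnClient = enabled
restartSplunkd = false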
Watch this video and pay special attention to "Hierarchical Deployments":