Hi,
To implement high availability for the Splunk Search Head Deployer across multiple regions, would it be possible to place a load balancer in front of the two deployers and configure the conf_deploy_fetch_url setting in server.conf on all Search Head Cluster members to point to the load balancer's VIP? Would this approach work in the scenarios below? Please advise.
When does the deployer distribute configurations to the members?
The deployer distributes configurations to the cluster members under these circumstances:
1) When you invoke the splunk apply shcluster-bundle command, the deployer pushes any new or changed configurations to the members. See Deploy a configuration bundle.
2) When a member joins or rejoins the cluster, it checks the deployer for configuration updates. A member also checks for updates whenever it restarts. If any updates are available, it pulls them from the deployer.
server.conf
[shclustering]
conf_deploy_fetch_url = <URL>:<management_port>
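For illustration, the stanza on each SHC member could then look like the following; the hostname lb-vip.example.com and port 8089 are placeholders for your own load balancer VIP and management port:
server.conf (on each SHC member)
[shclustering]
conf_deploy_fetch_url = https://lb-vip.example.com:8089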
I tested this scenario in a lab environment using 3 Search Head Cluster (SHC) members and 2 Search Head Deployers positioned behind a network load balancer. The setup was configured in an Active–Passive mode, with one deployer designated as Active and the other as Standby. The SHC members were configured with the deployer URI set to the Load Balancer VIP (conf_deploy_fetch_url = LB VIP:8089). I staged an application on the Active Deployer and pushed the bundle to a target cluster member, which successfully replicated to all three SHC members. I also tested scenarios where a new member was added or an existing member rejoined the cluster, and in both cases, the applications were deployed without issues. Based on these results, I believe that a load balancer or DNS with an Active–Passive configuration is a viable approach for implementing Search Head Deployer high availability.
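For reference, a push like the one described above uses the standard bundle deployment command; the member hostname and credentials below are placeholders:
# On the active deployer, with apps staged under $SPLUNK_HOME/etc/shcluster/apps
splunk apply shcluster-bundle -target https://shc-member1.example.com:8089 -auth admin:<password>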
As already said, you don't need an HA SHC deployer, as it isn't needed continuously during operations. Don't build an overcomplicated environment; it will create additional issues sooner or later.
BUT I have had situations where the deployer was down for several days or weeks, and that did cause issues. So the only thing you need to do is ensure that you can redeploy the deployer within a couple of hours if needed.
Even if you can create some weird configuration and get it to work, don't do it if it leads to an unsupported configuration! You or someone after you will find that out the hard way when something breaks!
Hi @srek3502 ,
as the others already said, you don't need HA on the deployer, because it is used only for the initial deployment of apps and for updates, not for day-to-day operation, even if you span many regions.
Ciao.
Giuseppe
I haven’t personally set up HA for the Search Head Deployer, but as far as I know, Splunk doesn’t support having multiple deployers or load balancing between them (it would be interesting if you could test it).
I suggest using system-level HA options, like DNS-based failover or VM failover (if your infrastructure supports it).
Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
Don’t put a load balancer in front of two search head deployers. An SHC is designed to have one deployer per cluster, and members should point conf_deploy_fetch_url at that single deployer. Running multiple active deployers (even behind a VIP) is unsupported and risks conflicting bundles.
Each search head cluster needs one deployer, which lives outside the cluster and distributes bundles.
If high availability is critical:
Use tools like Ansible or scripts to switch members to a standby deployer and update conf_deploy_fetch_url if needed (see the sketch below).
You can use a single deployer for multiple SHCs only if all clusters have identical apps and configurations and share the same secret.
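If you do script a manual failover to a standby deployer, a minimal sketch might look like this; the hostnames are placeholders, and it assumes your Splunk version supports changing this setting via the CLI (otherwise edit server.conf directly):
# Run on each SHC member to point it at the standby deployer
splunk edit shcluster-config -conf_deploy_fetch_url https://deployer-standby.example.com:8089 -auth admin:<password>
# Alternatively, update the [shclustering] stanza in server.conf and restart the member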