My client needs High Availability in the heavy forwarders.
They are collecting events from devices in one datacenter and sending them to the indexer in another datacenter.
Those events pass through a Heavy Forwarder, so that HF needs to be highly available.
Is there a way to create a cluster of Heavy Forwarders, so that if one goes down the other takes over receiving the events and sending them to the indexer?
If not, how can we achieve HA in this architecture?
Thank you very much
Do configuration changes made on one Heavy Forwarder also replicate to the other while they are in cluster mode?
No, configurations are not replicated automatically between Heavy Forwarders; you have to maintain the same configurations on both of them yourself.
You could manage the HFs using the Deployment Server to be sure that both HFs have the same configurations. This is a good solution if you only have to ingest logs from Universal Forwarders.
There's a problem, however, if you're using the HFs to ingest syslog: with the Deployment Server you don't control when Splunk restarts, and both HFs could restart at the same time, which isn't acceptable when you're ingesting syslog.
In that case I suggest managing the configurations manually.
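As a rough sketch of the Deployment Server approach, each HF would point at the Deployment Server in its `deploymentclient.conf`, and on the Deployment Server a single server class would cover both HFs so they always receive identical apps (the hostnames and the app name `hf_inputs` below are hypothetical):

```ini
# On each HF: $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
# Hypothetical Deployment Server host and management port
targetUri = ds.example.com:8089
```

```ini
# On the Deployment Server: $SPLUNK_HOME/etc/system/local/serverclass.conf
[serverClass:heavy_forwarders]
whitelist.0 = hf01.example.com
whitelist.1 = hf02.example.com

[serverClass:heavy_forwarders:app:hf_inputs]
# Avoid uncontrolled restarts when the HFs ingest syslog
restartSplunkd = false
```

Setting `restartSplunkd = false` on the deployed app avoids the simultaneous-restart problem described above, at the cost of having to restart each HF manually, one at a time.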
To have HA on Heavy Forwarders you need at least two HFs; then configure your Universal Forwarders (if present) with auto load balancing, addressing both HFs.
If you have syslog traffic, you additionally need a load balancer between the appliances and the HFs, so that syslog traffic is distributed to both HFs in normal operation and goes only to the surviving one in a fault situation.
Use autoLB across all the HFs from the UFs.
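A minimal sketch of the autoLB setup on each Universal Forwarder's `outputs.conf`, assuming two hypothetical HF hosts listening on the conventional receiving port 9997:

```ini
# On each UF: $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
# Both HFs listed; the UF load-balances across them and
# automatically skips a target that is down
server = hf01.example.com:9997, hf02.example.com:9997
# How often (seconds) the UF switches to another target
autoLBFrequency = 30
```

With both HFs listed, a UF that cannot reach one target simply keeps sending to the other, which gives you the failover behavior the question asks about without any HF-side clustering.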