Getting Data In

High Availability for Heavy forwarder configuration?

New Member

Current setup of the Splunk instance is 10 UF ----> 2 HF ----> 3 IDX.
For load balancing at the HF we use an autoLB config with a frequency setting. Can anyone help with a failover or HA mode for the HF configuration? Will it still work for BCP if one of the HF hosts or network paths fails?

UF1------->} HF1
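For reference, the autoLB-with-frequency setup described above would typically look something like this in outputs.conf on each HF (a minimal sketch; the idx hostnames and port 9997 are placeholder values, not from the actual environment):

```ini
# outputs.conf on each heavy forwarder
[tcpout]
defaultGroup = idx_group

[tcpout:idx_group]
# example indexer hostnames and receiving port
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
# autoLB is on by default; autoLBFrequency controls how often
# the forwarder switches to a new indexer (in seconds)
autoLB = true
autoLBFrequency = 30
```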


Ultra Champion

Hello there,
@jplumsdaine22's comments are very valid, and I recommend following his lead.
Straight out: there is no real HA configuration for a HF.
With that being said, you can achieve some sort of continuance / semi-HA configuration with your architecture by configuring the Universal Forwarders to auto load balance across both HFs. Then, if one HF is down, the Universal Forwarders will send data to the HF that is still up.
outputs.conf on every UF has to list both HFs, contrary to the diagram in your question.
Hope it helps
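A minimal sketch of what that outputs.conf on each UF could look like (the hf hostnames and port 9997 are example values, not taken from the question):

```ini
# outputs.conf on each Universal Forwarder
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
# list BOTH heavy forwarders so traffic fails over if one is down
server = hf1.example.com:9997, hf2.example.com:9997
autoLB = true
# switch targets every 30 seconds (the default)
autoLBFrequency = 30
```

With both HFs listed, the UF rotates between them and automatically drops a target it cannot reach, which is what gives you the semi-HA behaviour described above.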



Is there a particular reason you're using the HFs at all? It is best practice to have NO intermediate tier between the UFs and Indexers.



The HFs may be needed depending on the size of the environment. For example, having 1000 servers all sending directly to the indexer would not be a great idea, as this would impact performance, especially if the indexer is also being used as a Search Head / Deployment Server / License Master and so on. In a smaller setup, yes, that would be best practice.

Having a HF installed per site, for example to split up a London site and a Birmingham site, is a situation where I could see a requirement for HA. In that case, I guess you would configure the UFs to send logs to both HFs for that site, so if one goes down the logs are still routed via the other. If the whole site went down, then you would have no logging.

The problem you then have is that you could potentially be doubling up on the logs being sent, and that duplication would count against your license. I don't know so much about that side of things, but I'm sure there is a way of dropping duplicate logs.


New Member

Yes, it's needed to filter and route a few event logs before the indexer.



You probably know already, but just in case you don't: all that filtering & routing can probably be done without the HFs. I collect data from thousands of Windows UFs without any heavy forwarders - filtering is all done at the index layer.
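As a sketch of what index-layer filtering looks like, the usual pattern is a props.conf / transforms.conf pair on the indexers that routes unwanted events to nullQueue (the sourcetype name and regex here are made-up examples, not from this thread):

```ini
# props.conf on the indexers
# ("WinEventLog:Security" is an example sourcetype)
[WinEventLog:Security]
TRANSFORMS-drop_noise = drop_logon_noise

# transforms.conf on the indexers
[drop_logon_noise]
# discard events matching this (example) EventCode
REGEX = EventCode=4662
DEST_KEY = queue
FORMAT = nullQueue
```

Events sent to nullQueue are discarded before indexing, so they do not count against the license, which removes one of the main reasons for an intermediate HF tier.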

There's a good blog post explaining why HFs are not so great here:
