Is there any advantage to sending data from UFs to an intermediate HF instead of directly to indexers?
I recall reading that by relaying data UF > HF > indexer, there are certain advantages (e.g. running scripts on the HF before indexing), but with regard to DR, will the HF store events for delayed transmission if the indexers go down or the connection to the indexers goes down?
I know there is a link on this topic but unfortunately I cannot find it.
Let's distinguish "intermediary forwarder" from heavy forwarder. You can easily use a UF to serve as an intermediary forwarder, if you don't have requirements that mandate a heavy forwarder. And that's what you should use, if no such requirements exist, because the UF has a much lower resource impact, scales better and has less overhead on the wire.
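To illustrate, here is a minimal sketch of what an intermediary UF tier looks like in configuration. The hostnames and port are assumptions for the example; adjust to your environment:

```ini
# inputs.conf on the intermediary UF -- listen for the endpoint forwarders
[splunktcp://9997]
disabled = 0

# outputs.conf on the intermediary UF -- relay everything on to the indexers
[tcpout:indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

The endpoint UFs simply point their own `outputs.conf` at the intermediary instead of at the indexers; nothing else changes on them.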
As a rule of thumb, filtering events before they go over the wire is only going to provide a net benefit if a large number of events are subject to be filtered (~40-50%). Otherwise, do your filtering on the indexers and take advantage of the more efficient wire protocol the UF has.
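For reference, index-time filtering on the indexers is just a props/transforms pair routing unwanted events to `nullQueue`. The sourcetype name and regex below are assumptions for the sketch:

```ini
# props.conf on the indexers
[my_sourcetype]
TRANSFORMS-drop_debug = drop_debug

# transforms.conf on the indexers
[drop_debug]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue
```

The same stanzas would work on a HF, which is exactly why the decision comes down to what fraction of events you expect to drop.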
Personally I think that there are more disadvantages and pitfalls with any architecture that contains intermediary forwarding tiers than there are advantages. The biggest issue is that intermediary forwarding tiers are often not sized properly, which impacts event distribution of 100s or 1000s of endpoint forwarders across indexers. You want a ratio of at least 2x intermediary forwarding pipelines to indexers, to ensure that most indexers are receiving data at any given time.
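One way to get more pipelines without more servers is to enable parallel ingestion pipelines on each intermediary forwarder. A minimal sketch (the value 2 is an example; each extra pipeline costs CPU and memory):

```ini
# server.conf on each intermediary forwarder
[general]
parallelIngestionPipelines = 2
```

With 2 pipelines per forwarder, a tier of N intermediaries gives you 2N concurrent output streams toward the indexers, which is what the 2x ratio above is counting.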
Troubleshooting also gets more complicated and you have another tier that you need to manage configurations on.
Sometimes you have to have an intermediary forwarding tier, for example if network restrictions don't permit connections from forwarders directly to indexers, or if you need to do selective forwarding to third-party systems.
If you can avoid them, you are almost always better off: you end up with an environment that has fewer event distribution issues and lower data ingest latency, is easier to manage, and provides a better TCO overall, since you need fewer servers to support your architecture.
I hope this helps!
Thank you as well for your comments, especially for reiterating that point about filtering events. I had heard similarly in training that unless more than 80% of events are going to be filtered out, you should use a lightweight UF.
In terms of the major differences, a UF can run scripts but it cannot run full Splunk applications, for example Splunk DB connect.
Sending data from a UF to a HF allows data to be dropped or manipulated before it progresses to the indexing tier; a HF can also be set up for indexing and forwarding.
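As an example of manipulation on a HF, a `SEDCMD` in props.conf can mask sensitive values before they ever reach the indexers. The sourcetype and the card-number regex are assumptions for the sketch:

```ini
# props.conf on the heavy forwarder
[my_sourcetype]
# mask all but the last 4 digits of a 16-digit number
SEDCMD-mask_card = s/\d{12}(\d{4})/XXXXXXXXXXXX\1/g
```

Dropping events works the same way as on the indexers, via a transforms.conf stanza that routes matches to `nullQueue`.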
In terms of disaster recovery, if the indexing tier is down, then it's simply a matter of time before the upstream universal forwarders/heavy forwarders block their queues. If you're using indexer acknowledgement, you will prevent in-flight data loss and the block will occur quickly. A heavy forwarder tier, in my opinion, isn't going to help with DR.
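Indexer acknowledgement is a single setting in the forwarder's outputs.conf; the group name and servers below are assumptions for the sketch:

```ini
# outputs.conf on the forwarder
[tcpout:indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
```

With `useACK = true`, the forwarder keeps events in its wait queue until the indexer confirms they were written, so a mid-flight indexer failure causes a resend rather than silent loss.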
I think the answer is going to depend on your scenario. If you have control of your universal forwarders and there is no requirement to drop data as it progresses through the network, I would definitely not add a layer of universal or heavy forwarders.
If you require the data to be manipulated, you may wish to offload the work to heavy forwarders rather than do this on the indexing tier, again this depends on your environment.
If you require the data to be dropped at a particular point in the network before it progresses to the indexing tier, then a heavy forwarder definitely makes sense!
More opinions welcome!
Actually you have not clicked the up-vote on this particular post 🙂
I do find this topic interesting, so I'm interested in other opinions too