In the current design, we proposed two load-balanced HFs to collect data from 200+ endpoints and pass it to the next level of heavy forwarders in the Splunk-hosted environment.
However, given concerns about the data being "cooked" at the HFs (due to parsing), we are considering replacing the intermediate HFs with UFs, since no indexing or filtering is planned at the intermediate layer.
Before we proceed with this approach, can anyone advise whether it is possible to have two dedicated machines running load-balanced UFs as an intermediate layer to receive data from the 200+ UFs at the endpoints?
These intermediate UFs would be horizontally load balanced using Splunk's built-in forwarder load-balancing feature (adding the IPs of both intermediate UFs to outputs.conf at the endpoints), as below:
-- outputs.conf at endpoint UF1 - UF200 (the stanza name "intermediate_uf" is illustrative)
[tcpout:intermediate_uf]
server = UF1:9997, UF2:9997
autoLB = true
autoLBFrequency = 30
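For completeness, a sketch of what the two intermediate machines themselves would need: an inputs.conf listener to receive the forwarded traffic, and their own outputs.conf pointing at the indexing tier. The indexer names idx1/idx2 and the stanza name "indexers" are assumptions for illustration, not taken from this thread:

-- inputs.conf at each intermediate UF - listen for forwarded data on 9997
[splunktcp://9997]

-- outputs.conf at each intermediate UF - load balance onward to the indexers
[tcpout:indexers]
server = idx1:9997, idx2:9997
autoLB = true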
Absolutely, this would be my recommendation for sure. UFs scale much better than HFs because of the way they package data for the indexers.
Also, make sure that you have at least twice as many intermediary forwarders as you have indexers, so you don't negatively affect your event distribution.
You can achieve this by configuring parallel ingestion pipelines on your intermediary forwarders. If you have enough cores, you can increase pipeline sets to a higher number. The intermediary forwarder will then have connections to multiple indexers vs. just one at a time, improving your event distribution across your indexing tier.
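As a sketch, parallel ingestion pipelines are enabled in server.conf on the intermediary forwarder; the value 2 below is just an example, and the right number depends on the cores available on that machine:

-- server.conf at the intermediary forwarder
[general]
parallelIngestionPipelines = 2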
Best practice for sure is to NOT have any intermediary forwarders at all, and for sure not multiple layers of intermediary forwarders. I understand that network connectivity or restrictive firewall policies don't always make that easy or even possible, but understand that every intermediary forwarder tier introduces challenges that need to be properly managed. Keep it simple! 😉
@ssievert, are parallel ingestion pipelines supported on universal forwarders? According to the documentation, I was under the impression that they are not.