Getting Data In

Missing events and flooding of data on the heavy forwarder

DataOrg
Builder

I have 4 regions of Splunk servers, and the architecture is:

UF (data from 20 locations) ---> HF ---> Indexers ---> Search Head

If I add a new UF that replaces an old server, I need to change props.conf to route its data to the correct indexer, which requires a restart of the HF. During this time some events are lost, and data floods the HF once it comes back online.

Can you please suggest a best practice to avoid the missing events and the flood of data?
I don't have clustering...
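
For context, the routing on the HF looks roughly like this (the sourcetype, group, and host names below are placeholders, not my real config):

# props.conf on the HF
[my_sourcetype]
TRANSFORMS-routing = route_to_region1

# transforms.conf on the HF
[route_to_region1]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = region1_indexers

# outputs.conf on the HF
[tcpout:region1_indexers]
server = idx-region1.example.com:9997

Every time a UF is replaced, I have to adjust a stanza like this and restart the HF.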


richgalloway
SplunkTrust

Best practice is not to use intermediate forwarders except in special circumstances. The HF does not distribute events as evenly as the UFs would and actually makes the indexers work harder. Eliminating the HF means you don't have to restart it when you add a UF, so your problem goes away.

That said, the only lost events would be those "in flight" when the HF is restarted. That can be alleviated by turning on indexer acknowledgement (useACK).
Flooding of data is normal when a connection is re-established. The UFs buffer events while the HF restarts and then send that buffer once the HF is back up.
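
A minimal sketch of a UF outputs.conf that sends direct to the indexers with indexer acknowledgement turned on might look like this (the group name and host names are only examples):

# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = region1_indexers

[tcpout:region1_indexers]
# Sending directly to the indexers removes the HF restart problem;
# listing several receivers lets the UF load-balance across them.
server = idx1-region1.example.com:9997, idx2-region1.example.com:9997
# Ask the receiver to acknowledge data before the UF discards it,
# so events "in flight" during a restart are re-sent.
useACK = true

The same useACK setting works if the UF still points at an HF instead of the indexers.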

---
If this reply helps you, Karma would be appreciated.


FrankVl
Ultra Champion

To add to that: if you are indeed stuck with using HFs as an intermediate layer, put more than one in place and use Splunk's automatic load balancing to distribute data across the two (or more) HFs. If one of them is down or restarted for maintenance, the other can still process data.
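
On the UF side that is just a matter of listing both HFs as receivers; a rough sketch (host names are placeholders):

# outputs.conf on each universal forwarder
[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
# Listing more than one receiver enables automatic load balancing;
# if one HF is down, the UF keeps sending to the other.
server = hf1.example.com:9997, hf2.example.com:9997
useACK = true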

DataOrg
Builder

I don't have 2 HFs... we have only one HF.


FrankVl
Ultra Champion

Sounds like you may need to add one (or more) then. It not only improves availability, it also improves data distribution across your indexers.
