Getting Data In

Missing events and data flooding on the heavy forwarder

DataOrg
Builder

I have Splunk servers in 4 regions, and the architecture is:

UF (data from 20 locations) ---> HF ---> indexers ---> search head

So if I add any new UF that replaces an old server, I need to change props.conf to route its data to the correct indexer, which requires a restart of the HF. During this time some events are lost, and the HF is flooded with data once it comes back online.

Can you please suggest a best practice to avoid the missing events and the flood of data?
I don't have clustering...
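For reference, the routing change on the HF is along these lines (the stanza names, host pattern, and server names below are examples, not my exact config):

# props.conf on the HF -- the host pattern stands in for the new UF
[host::new-uf-server*]
TRANSFORMS-route_region = route_to_region1

# transforms.conf on the HF -- send matching events to one output group
[route_to_region1]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = region1_indexers

# outputs.conf on the HF -- target indexers for that region (example hosts/port)
[tcpout:region1_indexers]
server = region1-idx1:9997, region1-idx2:9997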

0 Karma
1 Solution

richgalloway
SplunkTrust

Best practice is not to use intermediate forwarders, except in special circumstances. The HF does not distribute events as evenly as the UFs would and actually makes the indexers work harder. Eliminating the HF means you don't have to restart it when you add a UF, so your problem goes away.

That said, the only lost events would be those "in flight" when the HF is restarted. That can be alleviated by turning on indexer acknowledgment.
Flooding of data is normal when a connection is re-established: the UFs buffer events while the HF is restarting and then send that buffer once the HF is back up.
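As a rough sketch (the group name, host names, and port are examples, not your environment), sending from the UFs straight to the indexers with acknowledgment enabled looks something like this in outputs.conf:

# outputs.conf on each UF -- group name, hosts, and port are examples
[tcpout]
defaultGroup = region1_indexers

[tcpout:region1_indexers]
server = idx1.region1.example.com:9997, idx2.region1.example.com:9997
# indexer acknowledgment: the UF re-sends anything the receiver never confirmed
useACK = true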

---
If this reply helps you, Karma would be appreciated.

FrankVl
Ultra Champion

To add to that: if you are indeed stuck with using HFs as an intermediate layer, put more than one in place and use Splunk's automatic load balancing to distribute data across the two (or more) HFs. If one of them is down or restarted for maintenance, the other can still process data.
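For example (host names and port are placeholders), each UF's outputs.conf would list both HFs in a single output group, and the forwarder load-balances between them automatically:

# outputs.conf on each UF -- both HFs in one group; hosts and port are placeholders
[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997
# rotate between the listed HFs every 30 seconds (the default)
autoLBFrequency = 30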

DataOrg
Builder

I don't have 2 HFs... we have only one HF.

0 Karma

FrankVl
Ultra Champion

Sounds like you may need to add one (or more) then. It not only improves availability, it also improves data distribution across your indexers.
