Getting Data In

Missing events and flooding of data on the heavy forwarder

DataOrg
Builder

I have 4 regions of Splunk servers, and the architecture is:

UF (data from 20 locations) ---> HF ---> indexers ---> search head

When I add a new UF that replaces an old server, I need to change props.conf on the HF to route its data to the correct indexer, which requires restarting the HF. During the restart some events are lost, and the HF is flooded with data once it is back online.

Can you please suggest a best practice to avoid the missing events and the flood of data?
I don't have clustering...
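
For context, my routing on the HF is done roughly like this (a minimal sketch; the host pattern, transform name, output group, and indexer addresses are placeholders, not my real values):

    # props.conf on the HF: pick out data from the new server (host pattern illustrative)
    [host::newserver*]
    TRANSFORMS-route = route_to_region1

    # transforms.conf on the HF: set the output group for those events
    [route_to_region1]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = region1_indexers

    # outputs.conf on the HF: the output group pointing at that region's indexers
    [tcpout:region1_indexers]
    server = idx-region1-a:9997, idx-region1-b:9997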

1 Solution

richgalloway
SplunkTrust

Best practice is not to use intermediate forwarders, except in special circumstances. The HF does not distribute events as evenly as the UFs would and actually makes the indexers work harder. Eliminating the HF means you don't have to restart it when you add a UF, so your problem goes away.

That said, the only lost events would be those "in flight" when the HF is restarted. That can be alleviated by turning on indexer acknowledgment.
Flooding of data is normal when a connection is re-established: the UFs have been buffering events while the HF restarted and then send that buffer once the HF is back up.
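
As a rough illustration of both points, an outputs.conf on a UF that sends straight to the indexers with indexer acknowledgment turned on could look like this (group name and server addresses are placeholders, not taken from this thread):

    # outputs.conf on each UF: forward directly to the indexers and enable
    # indexer acknowledgment so in-flight events are re-sent after a drop
    [tcpout]
    defaultGroup = region1_indexers

    [tcpout:region1_indexers]
    server = idx-region1-a:9997, idx-region1-b:9997
    useACK = true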

---
If this reply helps you, Karma would be appreciated.


FrankVl
Ultra Champion

To add to that: if you are indeed stuck with HFs as an intermediate layer, put more than one in place and use Splunk's automatic load balancing to distribute data across the two (or more) HFs. If one of them is down or restarted for maintenance, the other can still process data.
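
In practice that means listing both HFs in one output group on the UFs, so each forwarder load-balances between them and fails over automatically (a sketch; the host names are illustrative):

    # outputs.conf on each UF: two HFs in one group enables Splunk's
    # automatic load balancing and failover between them
    [tcpout:intermediate_hfs]
    server = hf1.example.com:9997, hf2.example.com:9997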

DataOrg
Builder

I don't have 2 HFs... we have only one HF.


FrankVl
Ultra Champion

Sounds like you may need to add one (or more) then. It doesn't only improve availability; it also improves data distribution across your indexers.
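
On the distribution point, each HF (or each UF, if the intermediate layer is removed) would list every indexer in its output group and switch targets on a timer, along these lines (a sketch; the indexer names and frequency value are illustrative, not from this thread):

    # outputs.conf on the HF: spread events across all indexers, switching
    # the target periodically even for long-running streams (values illustrative)
    [tcpout:region1_indexers]
    server = idx1:9997, idx2:9997, idx3:9997
    autoLBFrequency = 30
    forceTimebasedAutoLB = true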
