Getting Data In

How do I configure a Splunk heavy forwarder to buffer/cache data until the indexers come back online after a disconnect?

daniel333
Builder

All,

I have a Splunk heavy forwarder collecting data from various endpoints, which it then forwards to the indexers. We recently had a config error that disconnected the HF from the indexers for a few hours. Some data was lost, some was not.

We have PLENTY of disk space on the heavy forwarder, and our understanding was that the HF would buffer/cache until the indexers came back online. That does not seem to be the case. Is there a setting I simply missed?

thanks in advance,
-Daniel

lguinn2
Legend

There are settings in outputs.conf for buffers and persistent queues. You need to set them on your heavy forwarder.
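
As a reference point, here is a minimal outputs.conf sketch for the heavy forwarder (the group name, hostnames, and queue size are illustrative, not from this thread). Note that the on-disk persistent queue itself is configured per input in inputs.conf, while outputs.conf covers the in-memory output queue and indexer acknowledgment:

[tcpout]
defaultGroup = primary_indexers
# In-memory output queue; raise it so the forwarder can absorb short indexer outages
maxQueueSize = 512MB

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# Ask the indexers to acknowledge each block before the forwarder discards it,
# so queued data is not lost if the connection drops mid-send
useACK = true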

However, a more fundamental question is - why are you using a heavy forwarder as a collection point? There are good reasons to do that, but it also creates a possible single point of failure. Unless you have a compelling reason, perhaps you can ditch the heavy forwarder altogether and go straight to the indexers.

Or perhaps you should consider having multiple heavy forwarders, to eliminate the single point of failure problem...
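
If you keep an intermediate tier, one common way to remove the single point of failure is to run two or more heavy forwarders and let the endpoint forwarders load balance across them. A minimal outputs.conf sketch for the endpoints (hostnames are illustrative):

[tcpout:intermediate_hf]
server = hf1.example.com:9997, hf2.example.com:9997
# The forwarder automatically load balances across the listed servers
# and fails over to whatever is still reachable if one connection drops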

woodcock
Esteemed Legend

Depending on how you handle syslog/network ports, any break in the chain drops events; UDP syslog in particular has no retransmission, so anything that arrives while the pipeline is blocked is simply gone. I am pretty sure this is your situation.
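
If the heavy forwarder is receiving syslog directly on a network port, a per-input persistent queue is the usual way to spill events to disk while the output is blocked. A minimal inputs.conf sketch for the heavy forwarder (the port and sizes are illustrative):

[udp://514]
sourcetype = syslog
# In-memory queue for this input
queueSize = 10MB
# When the in-memory queue fills, events spill to an on-disk queue up to this size
persistentQueueSize = 10GB

Even with a persistent queue, UDP syslog that arrives faster than the input can read it can still be dropped at the OS level, so a break upstream of the queue remains unrecoverable.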
