Getting Data In

How can a Splunk heavy forwarder be used to buffer/cache data until indexers come back online after a disconnect?

daniel333
Builder

All,

I have a Splunk heavy forwarder collecting data from various endpoints, which it then forwards up to the indexers. We recently had a config error that disconnected the HF from the indexers for a few hours. Some data was lost, some was not.

We have PLENTY of disk space on the heavy forwarders, and our understanding was that the HF would buffer/cache until the indexers came back online. That does not seem to be the case. Or is there a setting I simply missed?

thanks in advance,
-Daniel


lguinn2
Legend

There are settings for buffers and persistent queues: maxQueueSize in outputs.conf and persistentQueueSize in inputs.conf. You need to set them on your heavy forwarder; by default the queues live in memory and are fairly small, so a multi-hour outage overflows them and data gets dropped.
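
For example, a rough sketch of what that could look like on the HF. The tcpout group name, indexer addresses, port, and queue sizes below are placeholders, not values from this environment:

# outputs.conf on the heavy forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# larger in-memory output queue (default is auto)
maxQueueSize = 512MB
# wait for indexer acknowledgment so unacknowledged data is resent
useACK = true

# inputs.conf on the heavy forwarder
# persistent queues are configured per input and spill that input's queue
# to disk when the output is blocked
[tcp://5140]
persistentQueueSize = 10GB

Note that persistent queues only apply to network, scripted, and similar inputs; file monitor inputs don't need one, because the forwarder can simply re-read the files and catch up once the indexers are reachable again.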

However, a more fundamental question is - why are you using a heavy forwarder as a collection point? There are good reasons to do that, but it also creates a possible single point of failure. Unless you have a compelling reason, perhaps you can ditch the heavy forwarder altogether and go straight to the indexers.

Or perhaps you should consider having multiple heavy forwarders, to eliminate the single point of failure problem...
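
If you do send the endpoints straight to the indexers, their outputs.conf just lists every indexer, and the forwarder load-balances across them and fails over automatically. A rough sketch, with hypothetical host names:

# outputs.conf on each endpoint (universal forwarder)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
# rotate targets roughly every 30 seconds (the default)
autoLBFrequency = 30
# resend anything an indexer never acknowledged
useACK = true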


woodcock
Esteemed Legend

Depending on how you handle syslog/network port inputs, any break in the chain drops events. I am pretty sure that is what happened here.
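
To illustrate: if the HF listens for syslog on a plain UDP port, anything that arrives while the pipeline is blocked is simply discarded unless that input has a persistent queue. A rough sketch, assuming a syslog input on UDP 514 (the port and sizes are examples only):

# inputs.conf on the heavy forwarder
[udp://514]
sourcetype = syslog
# in-memory input queue
queueSize = 10MB
# spill to disk when the in-memory queue fills
persistentQueueSize = 5GB

Even then, UDP offers no delivery guarantee, so having a syslog server (rsyslog/syslog-ng) write to files that Splunk monitors is often the more robust pattern.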
