Getting Data In

How do Splunk forwarders handle broken receiver connections?

LBlaboon
New Member

This question is simply out of curiosity.

If a Splunk forwarder loses its connection with its receiver (assuming there is only one receiver/no load balancing), does it hang on to the data it's supposed to forward until the connection is re-established, or are the events generated during that time lost? This might not make much of a difference for monitored files, but what about the case where you have monitored program output (e.g. running xyz program once every 60 seconds)? If the program gets run while the connection to the receiver is broken, does the output get stored until the connection is re-established?
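
For context, by "monitored program output" I mean a scripted input configured in inputs.conf, something like this (the script path, sourcetype, and index here are just placeholders):

# inputs.conf on the forwarder - run the script every 60 seconds
[script://./bin/xyz.sh]
interval = 60
sourcetype = xyz_output
index = main
disabled = 0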

The docs mention the use of indexer acknowledgment, but that's all assuming that a connection is available. If I'm reading the docs correctly (and I might not be), indexer acknowledgment doesn't have an effect if there's no connection at all. Specifically, it says "Without load balancing, the forwarder has no way to continue sending data if its receiving node goes down." This seems to imply that if your connection to the receiver (or all the receivers in a cluster) is unavailable, then any events generated during that time will be lost.
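
For reference, the indexer acknowledgment setting I'm talking about is useACK in outputs.conf, which as far as I understand looks roughly like this (the group name, server, and queue size are just placeholders):

# outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997
useACK = true
# in-memory output queue held on the forwarder
maxQueueSize = 7MB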

Any info/clarification is much appreciated!

1 Solution

pinVie
Path Finder

Hi - this is how I think it works.

If a forwarder loses its connection to the indexer(s), it starts caching incoming events in its queues (regardless of how the data comes in). Data is cached as long as the queues aren't full; once they fill up, incoming events are lost. To spill the queues to disk, persistent queuing (http://docs.splunk.com/Splexicon:Persistentqueue) can be used.

I once had to shut down the indexer cluster and could not afford to lose data, so I increased the size of the persistent queue to about 1 GB, stopped the cluster, did the necessary work, and restarted the cluster. As far as I can tell, no events were lost 🙂
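
If it helps, the relevant settings are per-input in inputs.conf on the forwarder, and as far as I know persistent queues only apply to network, scripted, and FIFO inputs (not to monitored files). From memory the config looked roughly like this - the UDP stanza and sizes are just an example:

# inputs.conf on the forwarder
[udp://514]
# in-memory queue for this input
queueSize = 10MB
# on-disk overflow used once the in-memory queue is full
persistentQueueSize = 1GB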

HTH

LBlaboon
New Member

This is exactly what I was looking for.
Thanks!
