Getting Data In

Universal forwarders in cluster failover

aaronkorn
Splunk Employee

Hello,

We have two Linux syslog servers set up in a cluster receiving syslog feeds. When the primary server goes down, syslogging fails over to the secondary server; the universal forwarder agent is started automatically on failover and stopped again on failback to the primary. The issue we are having is that when it fails back to the primary, it ingests duplicate data. What is the best way to handle universal forwarders in a cluster so that duplicates are eliminated and the agent continues indexing where it left off?

1 Solution

sowings
Splunk Employee

The "bookmark" for where a Splunk forwarder left off ingesting monitored files is in the fishbucket "index". If you rsync this folder ($SPLUNK_DB/fishbucket) between the forwarders, they should fail over / fail back gracefully, without too much duplication.
