Getting Data In

universal forwarders in cluster failover

aaronkorn
Splunk Employee

Hello,

We have two Linux syslog servers set up in a cluster receiving syslog feeds. When the primary server goes down, syslogging fails over to the secondary server and the forwarder agent there is started automatically; when service fails back to the primary, that agent is stopped. One of the issues we are having is that when it fails back to the primary, duplicate data is ingested. What is the best way to handle universal forwarders in a cluster so that duplicates are eliminated and the agent continues indexing where it left off?

1 Solution

sowings
Splunk Employee

The "bookmark" for where a Splunk forwarder left off ingesting monitored files is in the fishbucket "index". If you rsync this folder ($SPLUNK_DB/fishbucket) between the forwarders, they should fail over / fail back gracefully, without too much duplication.


