Hello,
We have two Linux syslog servers set up in a cluster receiving syslog feeds. When the primary server goes down, syslog traffic fails over to the secondary server and the forwarder agent there is started automatically; when service fails back to the primary, that agent is stopped again. The issue we are having is that on failback the primary ingests duplicate data. What is the best way to handle universal forwarders in a cluster so that duplicates are eliminated and the agent continues indexing where it left off?
The "bookmark" for where a Splunk forwarder left off ingesting monitored files is in the fishbucket "index". If you rsync this folder ($SPLUNK_DB/fishbucket) between the forwarders, they should fail over / fail back gracefully, without too much duplication.
The "bookmark" for where a Splunk forwarder left off ingesting monitored files is in the fishbucket "index". If you rsync this folder ($SPLUNK_DB/fishbucket) between the forwarders, they should fail over / fail back gracefully, without too much duplication.