Getting Data In

Distributed Architecture: If data is sent to two heavy forwarders, how do you prevent duplicate logs?

Alteek
Explorer

Hi,

We are moving to a distributed architecture with 1 search head, 1 indexer and 2 heavy forwarders.

The idea is to forward logs from the targets (syslog, universal forwarders) to both heavy forwarders, so that logs are never lost.

But how can we avoid log duplication on the indexer in this case? Is it handled automatically?
Or perhaps there is a better way to do it.

Many thanks,
Regards

1 Solution

MuS
SplunkTrust

Hi Alteek,

you can use the load-balancing feature of the universal forwarder; check out the docs about "Configure forwarders with outputs.conf". With load balancing, each event is sent to only one of the two heavy forwarders, so you avoid event duplication.
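For example, a minimal outputs.conf on the forwarders could look something like this (the hostnames and port are just placeholders for your two heavy forwarders):

    [tcpout]
    defaultGroup = heavy_forwarders

    [tcpout:heavy_forwarders]
    # listing both heavy forwarders in one target group enables automatic load balancing
    server = hf01.example.com:9997, hf02.example.com:9997
    # switch between the targets every 30 seconds (this is the default)
    autoLBFrequency = 30
    # optional: wait for acknowledgement from the indexing tier so events are not lost
    useACK = true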
Regarding the syslog devices: use a DNS alias or DNS round robin that refers to both heavy forwarders, and use this DNS entry as the syslog target.
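As a rough sketch, the round robin could simply be two A records for the same name in your DNS zone (names and addresses here are made up):

    ; both heavy forwarders behind one name
    syslog.example.com.    300    IN    A    10.0.0.11
    syslog.example.com.    300    IN    A    10.0.0.12

The syslog devices then send to syslog.example.com instead of an individual forwarder.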

hope this helps to get you started ...

cheers, MuS


Alteek
Explorer

Thank you for your answer.

I'll try to use the load-balancing feature of the universal forwarder, and I have found some interesting topics about Linux Heartbeat for the syslog case.

Regards
