Getting Data In

Distributed Architecture: If data is sent to two heavy forwarders, how do you prevent duplicate logs?

Alteek
Explorer

Hi,

We are moving to a distributed architecture with 1 search head, 1 indexer and 2 heavy forwarders.

The idea is to forward logs from the targets (syslog, universal forwarders) to both heavy forwarders, so that logs are never lost.

But how can we avoid log duplication on the indexer in this case? Is it handled automatically? Or perhaps there is a better way to do it.

Many thanks,
Regards

1 Solution

MuS
Legend

Hi Alteek,

you can use the load-balancing feature of the universal forwarder; check out the docs about Configure forwarders with outputs.conf. This way you can avoid event duplication.
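As a minimal sketch of that load-balancing setup on the universal forwarder (the hostnames, port, and group name here are examples, not values from this thread):

```ini
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = hf_group

# One output group listing BOTH heavy forwarders.
# The forwarder load-balances across them and sends each
# event to only one target, so nothing is duplicated.
[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997

# Optional: switch targets every 30 seconds (default)
autoLBFrequency = 30
```

With this configuration the forwarder picks one receiver at a time and fails over to the other if it becomes unreachable, which is what gives you resilience without double-indexing.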
Regarding the syslog devices: use a DNS alias or DNS round robin that refers to both heavy forwarders, and use this DNS entry as the syslog target.
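For the DNS round-robin approach, the zone entries could look roughly like this (names and addresses are made up for illustration):

```
; Zone file fragment: one name, two A records.
; Resolvers rotate the answers, so syslog senders spread
; across both heavy forwarders; each message still goes
; to exactly one of them.
syslog-hf    IN  A  10.0.0.11   ; heavy forwarder 1
syslog-hf    IN  A  10.0.0.12   ; heavy forwarder 2
```

The syslog devices would then be pointed at `syslog-hf.example.com`. Note that plain round robin is load distribution, not true failover; for guaranteed delivery during an outage you would still want something like a virtual IP or heartbeat setup in front of the forwarders.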

hope this helps to get you started ...

cheers, MuS


Alteek
Explorer

Thank you for your answer.

I'll try to use the load-balancing feature of the universal forwarder, and I have found some interesting topics about Linux heartbeat for the syslog case.

Regards
