The deal is that I have 2 forwarders that receive exactly the same logs (I'm using 2 forwarders to avoid a SPOF), and I want to find a way to not end up with duplicated logs. I thought of using a load balancer, but first I'd like to know if there is any Splunk configuration that allows this, please.
Thank you very much for your answer 🙂
To answer your question: I have several Linux machines that forward their logs over syslog to 2 different universal forwarders, which is why I get the same logs twice. I agree this choice is kind of questionable :/ but there was no specific reason for forwarding the logs to standalone universal forwarders instead of installing a universal forwarder on every Linux machine.
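One option on the syslog side is to send each event to only one forwarder and fail over to the second only when the first is down, so nothing is duplicated. A minimal rsyslog sketch, assuming hypothetical forwarder hostnames (`uf1`/`uf2` are examples, not your real hosts):

```
# /etc/rsyslog.d/30-splunk-forwarders.conf
# Primary universal forwarder (example address)
*.* action(type="omfwd" target="uf1.example.com" port="514" protocol="tcp")
# Backup forwarder: fires only while the previous action is
# suspended, i.e. while the primary is unreachable
*.* action(type="omfwd" target="uf2.example.com" port="514" protocol="tcp"
           action.execOnlyWhenPreviousIsSuspended="on")
```

This keeps the second forwarder as a hot standby rather than a duplicate destination; events are emitted once either way.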
As @PickleRick said, there isn't any configuration to avoid duplicates.
If you're speaking of network or HEC logs and you cannot use a load balancer, you could configure your DNS to distribute logs across both forwarders and handle failover.
Here you can find how to do it: https://docs.microsoft.com/en-us/windows-server/networking/dns/deploy/app-lb
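As a simpler illustration of the DNS idea, plain round robin publishes one name with both forwarder addresses, and clients rotate between them (a BIND-style zone sketch; the hostname and IPs are examples):

```
; clients send syslog to "forwarder.example.com";
; DNS rotates between the two A records on each lookup
forwarder  IN  A  10.0.0.11   ; first universal forwarder
forwarder  IN  A  10.0.0.12   ; second universal forwarder
```

Note that basic round robin only distributes; it doesn't detect a dead forwarder by itself. The DNS policies feature in the linked Microsoft doc is what adds health-aware behavior.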
No. Splunk on its own does not do any form of deduplication. It's up to you to provide the input data in the form you need.
I must say, however, that I don't quite understand what you mean by "2 forwarders with the same logs". A network share with two separate clients mounting it? Why not simply ingest the logs directly on the source machines?
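To expand on ingesting at the source: a universal forwarder installed on each Linux machine can load-balance across multiple indexers itself, so each event is sent exactly once and there is still no SPOF. A sketch of `outputs.conf` on each machine (the group name and indexer hostnames are examples):

```
# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# The forwarder automatically load-balances across this list
# and skips any peer that becomes unreachable
server = idx1.example.com:9997, idx2.example.com:9997
```

This sidesteps the duplication problem entirely, since no event ever takes two paths into Splunk.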