Hi folks,
I'm having a hard time picking the right architecture to achieve high availability for my syslog inputs.
My current setup is:
- 4 UFs
- 2 HFs
- Splunk Cloud
Syslog is currently ingested on one of the HFs as a network input. I saw that to solve my issue I could ingest my syslog logs on a UF and forward them to my HFs, taking advantage of the built-in load balancing of the intermediate forwarders (aka the HFs), which would simplify the deployment a lot.
On the other hand, another solution I've seen is putting a dedicated load-balancing machine in front of the HFs to ingest the syslog data and balance the load.
Which solution is best suited for a Splunk deployment? IMO the first one is much more straightforward, but I need to validate that it is a correct approach.
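For reference, the built-in load balancing the first option relies on lives in outputs.conf on the forwarder. A minimal sketch (hostnames and the 30-second frequency are assumptions, not from this thread):

```ini
# outputs.conf on the syslog-receiving UF -- hostnames are hypothetical.
# Listing multiple servers in one target group enables automatic
# load balancing (autoLB) with failover if one HF goes down.
[tcpout]
defaultGroup = intermediate_hfs

[tcpout:intermediate_hfs]
server = hf1.example.com:9997, hf2.example.com:9997
# Rotate targets periodically (seconds), not only per connection
autoLBFrequency = 30
```

Note this balances the UF-to-HF leg only; the single UF receiving syslog is still a single point of failure.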
Thanks in advance!
Hi @PolarBear01 ,
the only way to have HA at the forwarder level is to have two or more receivers (rsyslog, syslog-ng, or SC4S), so your receivers keep working even if Splunk is down, with a load balancer in front that distributes syslog between them and manages failover.
Receivers can be located on UFs or on HFs; I usually use rsyslog on UFs!
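To illustrate the rsyslog-on-UF pattern (a sketch only; ports and paths are assumptions): rsyslog listens on the syslog port and writes events to per-host files, which the UF then monitors.

```
# /etc/rsyslog.d/splunk.conf -- minimal sketch; port and paths are assumptions
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# Write each sending host's events to its own file for the UF to pick up
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="PerHostFile")
```

The UF side would then need a matching monitor stanza in inputs.conf, e.g. `[monitor:///var/log/remote]`, plus log rotation on the written files.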
I don't know what you mean by manual balancing; for real HA you need a load balancer that works without any manual action.
There's also the possibility of using DNS for load balancing and failover, but DNS usually reacts with a delay when one receiver fails, so you lose the first logs; for this reason a real load balancer (e.g. F5) is the best solution for real HA.
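As a concrete example of such a load balancer (a sketch, not from this answer; HAProxy is shown in place of F5, and hostnames are hypothetical), TCP syslog can be balanced and health-checked like this:

```
# haproxy.cfg fragment -- hostnames and ports are assumptions
frontend syslog_in
    mode tcp
    bind *:514
    default_backend syslog_receivers

backend syslog_receivers
    mode tcp
    balance roundrobin
    # health checks drop a dead receiver within seconds,
    # unlike DNS failover, which depends on TTL and caching
    server rcv1 uf1.example.com:514 check
    server rcv2 uf2.example.com:514 check
```

Note this covers TCP syslog; plain HAProxy does not balance UDP, so UDP sources would need a different front end (or to be moved to TCP).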
The HFs are useful if you want to concentrate all logs before sending them to Splunk Cloud; otherwise (on-premises) they aren't mandatory.
Ciao.
Giuseppe
Hi
as you have SCP (Splunk Cloud Platform) in use, you have one additional option: you could use Splunk Edge Processor to take the syslog feed in. Of course you need a LB in front of those endpoints to get HA. But probably the easiest way is to use SC4S, as @gcusello said. You could run it on Docker, or even on k8s if you are familiar with it.
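For completeness, SC4S is typically run as a container along these lines (a sketch based on the SC4S docs; the HEC URL, token, and image tag are placeholders; check the current SC4S documentation for the exact values):

```
# Run SC4S in Docker -- all values below are placeholders
docker run -d --name sc4s \
  -p 514:514 -p 514:514/udp \
  -e SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your-stack.splunkcloud.com:443 \
  -e SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=00000000-0000-0000-0000-000000000000 \
  -v sc4s-local:/etc/syslog-ng/conf.d/local \
  ghcr.io/splunk/splunk-connect-for-syslog/container3:latest
```

For HA you would run two or more such instances behind the load balancer discussed above, since SC4S delivers to Splunk over HEC and each instance is stateless apart from its local disk buffer.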
r. Ismo