I have 2 heavy forwarders receiving UF logs from about 2000 Windows servers. The traffic is split at the HFs: it goes to our indexers and is also routed out via syslog to 2 F5 VIPs. For one specific server I see about 500k logs in 24 hours in Splunk, but on the receiving end of the syslog there are only 14 events. I'm pretty sure the HFs are overloaded and I've put in a request to have 2 more built, but I'm also wondering if there is any further tuning I can do. I'm not finding anything specific to HFs online. Thanks.
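In case it helps, the relevant parts of outputs.conf on the HFs look roughly like this (hostnames, ports, and group names are placeholders, not our exact config):

    # Splunk-to-Splunk (S2S) output to the indexers
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997

    # syslog output toward the F5 VIP on UDP/514
    [syslog]
    defaultGroup = f5_syslog

    [syslog:f5_syslog]
    server = f5-vip1.example.com:514
    type = udp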
It appears that you are splitting the output at your HFs. Whenever you do this, if EITHER of the outputs backs up (as it can, and when using TCP definitely will at some point), then BOTH of your destinations become blocked, because they share a single output queue. Fix whichever one is blocking and then both will catch up.
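If you want to confirm which output is the one blocking, one place to look (assuming your HFs forward their internal logs to the indexers) is the queue metrics in metrics.log. A search along these lines should list blocked queues by name; the tcpout groups show up as tcpout_<group>. Replace the host with your HF:

    index=_internal source=*metrics.log* host=<your_hf> group=queue blocked=true
    | stats count by host, name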
Thanks. One output is UDP/514 and the other is TCP/9997 to the Splunk indexers. I've been looking at how I can performance-tune the NIC, but where would I see that the outputs are being blocked? I see a bunch of reset errors on the NIC, btw.
They are sending a syslog feed to a 3rd party temporarily via a syslog stanza in outputs.conf. So the HFs send to our indexers via regular Splunk TCP/9997 and also route out syslog over UDP/514. I found the issue, though: the HFs were simply overwhelmed with too much data, and I offloaded some log sources to another HF.
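For anyone who finds this later: the "syslog statement" is just the standard selective-routing setup. If the 3rd party only needs a subset of the data, routing via props/transforms (rather than a global syslog defaultGroup) is one way to cut down what the HFs have to push out. A rough sketch, with illustrative stanza names and host filter:

    # props.conf on the HF: apply the routing transform to the hosts in question
    [host::winsrv*]
    TRANSFORMS-route_to_f5 = send_to_syslog

    # transforms.conf on the HF: send matching events to the syslog target group
    [send_to_syslog]
    REGEX = .
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = f5_syslog

Here f5_syslog is the [syslog:f5_syslog] target group defined in outputs.conf; events still go to the indexers via the normal tcpout defaultGroup.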
You cannot load-balance S2S that way with syslog. You need NiFi, Cribl, or DSP to do it right.
Taking load balancing out of the equation: one HF is going to one VIP and the other to a different VIP, so there is no load balancing from the HFs to the F5s. Apologies for the confusion.
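Concretely, each HF's outputs.conf just points its syslog target group at its own VIP, roughly like this (VIP hostnames are placeholders):

    # On HF1
    [syslog:f5_syslog]
    server = f5-vip1.example.com:514
    type = udp

    # On HF2
    [syslog:f5_syslog]
    server = f5-vip2.example.com:514
    type = udp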
I am confused by your statement that your HFs' "traffic is being split to our indexers and with a route out via syslog to 2 F5 VIPs." The "via syslog" part makes no sense to me. HFs do not talk to indexers via syslog; they only talk via S2S. You must be more clear about what you are doing.