Hey,
I have around 30 Splunk Universal Forwarders in my environment, monitoring the local Event Log (Windows Server 2016).
Lately I've noticed that a few forwarders are lagging and sending events too slowly.
I checked the traffic and noticed that only once every 20-30 seconds the forwarder sends a burst of around 3K events to the indexers, which is a very small amount of data, while the event log is generating far more events, far faster.
As a result, the slow forwarding has opened a gap of around 30 minutes in the indexed data.
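For reference, the event log inputs pushed to the forwarders look roughly like this (just a sketch, the exact channel list may differ per server):

[WinEventLog://Application]
disabled = 0

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0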
I tried increasing the queue sizes and setting thruput to unlimited (see the settings below). The server's performance seems fine, with no high CPU or memory usage.
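In case it helps, these are roughly the settings I changed; the values are just what I tried, not a recommendation:

limits.conf:
[thruput]
maxKBps = 0          # 0 = no thruput limit

outputs.conf:
[tcpout]
maxQueueSize = 10MB  # raised from the default

server.conf:
[queue=parsingQueue]
maxSize = 10MB       # raised from the default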
I looked at another server, which currently seems to send its events on time (it has far more events in its event log, yet it is faster), and from sniffing the traffic it seems that its forwarder sends events almost every second, with no ~20-second interval.
I tried forwarding to a different (test) environment, thinking the indexers might be receiving too many events from too many forwarders, but it did not seem to make any difference.
Also, the Splunk Universal Forwarder is configured the same way on all of the servers, via a deployment server.
I wonder if any of you have had this issue, or can think of a possible cause of the problem.
Thanks!