Hey,
I have around 30 Splunk Universal Forwarders in my environment, monitoring the local Event Log (Windows Server 2016).
Lately I noticed that a few forwarders are lagging / sending events too slowly.
I checked the traffic and noticed that once every 20-30 seconds the forwarder sends around 3K events to the indexers, which is a very small amount of data, while the event log is generating far more events, much faster.
The slow forwarding has opened a gap of around 30 minutes in the data.
I tried increasing the queue sizes and setting the thruput limit to unlimited. The performance of the server seems fine - no high CPU or memory usage.
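For reference, the changes I made looked roughly like this (the stanza values below are just examples of what I tried, not recommendations):

```ini
# limits.conf on the forwarder - 0 removes the default 256 KBps throughput cap
[thruput]
maxKBps = 0

# server.conf on the forwarder - enlarged parsing queue; the size is just an example
[queue=parsingQueue]
maxSize = 10MB
```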
I looked at another server, which currently seems to send its events on time (and has many more events in its event log, yet is faster), and from sniffing the traffic it seems that forwarder is sending events almost every second - no ~20-second interval.
I tried forwarding to a different (test) environment, thinking the indexers might be receiving too many events from too many forwarders, but it does not seem to make any difference.
Also, the Splunk Universal Forwarder on these servers is configured identically, via a deployment server.
I wonder if any of you had this issue, or can think of a possible cause to the problem.
Thanks!
Hi @omerl,
It's worth taking a look at https://answers.splunk.com/answers/686880/discrepancy-in-the-transfer-of-wineventlogsecurity.html if you are monitoring the Windows Security event log.
This is a known issue. Please raise a case with Splunk Support to review your environment/configuration; they can suggest a solution or workaround. The link above is worth checking, but note that setting use_old_eventlog_api = 1 changes the formatting of Windows events, and the parsing/field extractions from the Splunk Add-on for Windows don't work very well with it. You can evaluate it on your instance and share your findings with Support.
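If you do decide to test it, the setting goes in the event log input stanza in inputs.conf on the forwarder, roughly like this (the Security stanza here is just an example - apply it to whichever channels you monitor):

```ini
# inputs.conf - switch this input to the legacy Event Logging API
# (changes the event formatting; see the caveat above)
[WinEventLog://Security]
disabled = 0
use_old_eventlog_api = 1
```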
I tried updating the environment to version 7.2.3 and there is still no change. I am trying to contact Support. In the meantime - @lakshman239, you said this is a known issue; do you know anyone who has had it / solved it / has a suggestion regarding it?
See tcpSendBufSz in the outputs.conf spec file. Ideally you should only adjust this setting if you are very familiar with TCP/IP, or you can ask the support person you are dealing with for a recommendation.
https://docs.splunk.com/Documentation/Splunk/7.2.3/Admin/Outputsconf
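For illustration, the setting lives in your tcpout stanza in outputs.conf on the forwarder, something like this (the group name, server addresses, and buffer size below are placeholders, not a recommendation):

```ini
# outputs.conf on the forwarder - tcpSendBufSz sets the TCP send buffer
# size in bytes; left unset, the OS default is used
[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
tcpSendBufSz = 1048576
```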
Setting it as suggested above made no difference.
Thanks, but the suggestions in the link do not seem to help. Do you have another suggestion, or should I try a different tool for forwarding? I'm trying StreamSets Data Collector Edge, and I wonder whether you know of a better tool for forwarding WinEventLog data. Thanks!