We have one domain controller that is consistently about 5 hours behind in getting its logs into Splunk. This is our busiest domain controller, and its security event log file is set to 1 GB. We have already tuned the queue sizes on the heavy forwarders and indexers, and all other events come in quickly, which makes us think the issue must be on the universal forwarder (running the latest version, 6.3.2).
The output queue on the DC hovers around 200 KB/s, which makes us think the forwarder isn't working hard enough to get through the log file in time.
1 - Given that you are running a UF on a domain controller, make sure SID and GUID translation happens locally:
evt_dc_name = name of your domain controller
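A minimal sketch of what that looks like in `inputs.conf` on the UF (the stanza name and hostname below are placeholders; substitute your own Security event log input and DC):

```
# inputs.conf on the universal forwarder
[WinEventLog://Security]
disabled = 0
# Resolve SIDs/GUIDs to names
evt_resolve_ad_obj = 1
# Point resolution at this local DC instead of the default Global Catalog
evt_dc_name = mydc01.example.com
```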
Keep in mind that by default Splunk will try to resolve those IDs against your default Global Catalog, which is not great; I almost killed mine once. There's also no SID/GUID caching on the UF (at least not in 6.2), which would definitely help performance if it existed.
2 - I know it's not a real solution, but if the above doesn't help, have you tried temporarily disabling the SID and GUID translation?
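For testing, that would be something like the following on the UF (again, the stanza name is a placeholder for your actual Security input):

```
# inputs.conf on the universal forwarder - skip AD object resolution entirely
[WinEventLog://Security]
disabled = 0
# Events will show raw SIDs/GUIDs instead of resolved names
evt_resolve_ad_obj = 0
```

If throughput recovers with translation off, that points squarely at the resolution step as the bottleneck.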
I am going to second what @javiergn has already stated. Please don't be so quick to downvote other users in this forum unless they've given a blatantly wrong or possibly dangerous solution or workaround that could break something in your environment.