Good day,
We have one domain controller that is consistently about five hours behind in getting its logs into Splunk. It is our busiest domain controller, and its Security event log is set to 1 GB. We have already tuned the queue sizes on the heavy forwarders and indexers, and all other events come in quickly, which makes us think the issue must be on the universal forwarder (latest version, 6.3.2).
The output queue on the DC hovers around 200 KB/s, which makes us think the forwarder isn't working hard enough to parse the log file in time.
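One more data point: a universal forwarder's outbound throughput is throttled by the [thruput] stanza in limits.conf, which defaults to 256 KBps on a UF, so the ~200 KB/s we observe may simply be the forwarder sitting near its default cap. A minimal sketch of lifting that cap, assuming the stock etc/system/local/limits.conf on the forwarder:

    [thruput]
    # Default on a universal forwarder is 256 (KBps); 0 removes the cap.
    maxKBps = 0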
Any suggestions?
Couple of things:
1 - Given that you are running a UF on a domain controller, make sure the SID and GUID translation happens locally (see the inputs.conf sketch after point 2):
evt_dc_name = <name of your domain controller>
Keep in mind that by default Splunk will try to resolve those IDs against your default Global Catalog, which is not great; I almost killed mine once. There's also no SID/GUID caching on the UF (at least not as of 6.2), which would otherwise improve performance considerably.
2 - I know it's not a solution, but if the above doesn't help, have you tried temporarily disabling the SID and GUID translation?
evt_resolve_ad_obj = 0
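For context, both settings live in the Windows event log stanza of inputs.conf on the forwarder. A minimal sketch, assuming the Security log is collected through the standard WinEventLog input (the DC name is a placeholder):

    [WinEventLog://Security]
    disabled = 0
    # Resolve SIDs/GUIDs against a specific nearby DC instead of the default Global Catalog.
    evt_resolve_ad_obj = 1
    evt_dc_name = YOUR-DC-HOSTNAME
    # Or, to test whether translation is the bottleneck, disable it entirely:
    # evt_resolve_ad_obj = 0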
Hope that helps.
I downvoted this post because it's a workaround; it doesn't address the underlying issue.
I am going to second what @javiergn has already stated. Please don't be so quick to downvote other users in this forum unless they've given a blatantly wrong or possibly dangerous solution or workaround that could break something in your environment.
Please be sure to check out the major points in this previous post on how etiquette works in this forum. You'll establish strong(er) networks with the right people much faster if you take a more positive approach in how you engage with other folks on Splunk Answers and other community spaces.
https://answers.splunk.com/answers/244111/proper-etiquette-and-timing-for-voting-here-on-ans.html
Cheers
Patrick
I disagree with you. If my answer does not contain any mistakes and it can help others, including the owner of this question, then it shouldn't be downvoted.
At least that's what's happening around here: you are basically penalising people for offering solutions and workarounds.
You can always raise a Splunk support call if this answer doesn't work for you.
@ppablo_splunk can you help here please?