Getting Data In

How to troubleshoot why security events from one domain controller are getting indexed with a delay of 5 hours?

New Member

Good day,

We have one domain controller that is always about 5 hours behind in having its logs available in Splunk. It is our busiest domain controller, and its security event log file is set to 1 GB in size. We have already tuned the queue sizes on the heavy forwarders and indexers, and all other events come in quickly, which makes us think the issue must be on the universal forwarder (latest version, 6.3.2).

The output queue on the DC hovers around 200 KB/s, which makes us think the forwarder isn't working hard enough to parse the log file in time.

Any suggestions?


Super Champion

A couple of things:

1 - Given that you are running a UF on a domain controller, make sure SID and GUID translation happens locally:

evt_dc_name = name of your domain controller

Keep in mind that by default Splunk will try to resolve those IDs against your default Global Catalog, which is not great; I almost killed mine once. There's also no SID/GUID caching on the UF (at least not in 6.2), so resolving against the local DC will definitely improve performance.
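For reference, here's a minimal sketch of what that could look like in the UF's inputs.conf on the DC, using the standard Security event log stanza (the hostname value is a placeholder for your own DC):

[WinEventLog://Security]
disabled = 0
evt_resolve_ad_obj = 1
evt_dc_name = mydc01.example.local

Pointing evt_dc_name at the DC the UF is running on keeps the lookups local instead of going out to the Global Catalog.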

2 - I know it's not a solution, but if the above doesn't help, have you tried temporarily disabling SID and GUID translation?

evt_resolve_ad_obj = 0
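As a sketch, the same Security stanza in the UF's inputs.conf would then look something like this (events will show raw SIDs/GUIDs instead of names while this is in place):

[WinEventLog://Security]
disabled = 0
evt_resolve_ad_obj = 0

If the indexing lag disappears with translation off, that points strongly at AD object resolution as the bottleneck.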

Hope that helps.

Path Finder

I downvoted this post because it's a workaround but doesn't address the issue.



I am going to second what @javiergn has already stated. Please don't be so quick to downvote other users in this forum unless they've given a blatantly wrong or possibly dangerous solution or workaround that could break something in your environment.

Please be sure to check out the major points in this previous post on how etiquette works in this forum. You'll establish strong(er) networks with the right people much faster if you take a more positive approach in how you engage with other folks on Splunk Answers and other community spaces.




Super Champion

I disagree with you. If my answer doesn't contain any mistakes and it can help others, including the owner of this question, then it shouldn't be downvoted.

At least that's how things usually work around here. You are basically penalising people for offering solutions and workarounds.

You can always raise a Splunk support call if this answer doesn't work for you.

@ppablo_splunk can you help here please?
