Getting Data In

Why are the latest Windows security logs indexed from our domain controllers 120 minutes old?

pavankumarh
Path Finder

Hi,

We have the same app deployed on multiple Domain Controllers. On some DCs, the data indexed in Splunk is about 2.5 hours old. Although the DC has the latest entries in its Event Log, they appear in Splunk (6.1.8) only after a certain delay.

We have tried upgrading the forwarder version and configuring thruput in limits.conf, but neither helped.

Please help.


woodcock
Esteemed Legend

The problem is almost certainly not a genuine lag, but an interpreted one. In other words, Splunk is timestamping events 2.5 hours into the future so that you only see them 2.5 hours later. This is almost always a TZ issue (although occasionally it is due to clock drift). You can see for yourself with this search:

index=* | eval date_zone=coalesce(date_zone,"none") | eval prev_sourcetype=if(sourcetype==_sourcetype,"none",_sourcetype) | dedup date_zone splunk_server index host sourcetype timestamp prev_sourcetype | eval lagSecs=_indextime-_time

Anything with lagSecs<0 is a huge problem: an event cannot be indexed before it occurred, so a negative value must be due to a bad timestamp (or a bad interpretation of one).
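Not part of the original answer, but the time-zone arithmetic can be sketched in a few lines. Assume a zone-less timestamp written by a DC in IST (Asia/Calcutta, UTC+05:30) and an indexer that, hypothetically, parses it as UTC+03:00 (the 2.5-hour gap from the question); the event's _time then lands 2.5 hours in the future and lagSecs = _indextime - _time goes negative:

```python
from datetime import datetime, timezone, timedelta

FMT = "%Y-%m-%d %H:%M:%S"
raw = "2016-05-10 10:00:00"  # zone-less timestamp as written in the event log

ist = timezone(timedelta(hours=5, minutes=30))   # the DC's real zone (Asia/Calcutta)
misread_zone = timezone(timedelta(hours=3))      # hypothetical wrong zone applied at parse time

# Epoch of the instant the event actually happened (10:00 IST = 04:30 UTC).
actual_epoch = datetime.strptime(raw, FMT).replace(tzinfo=ist).timestamp()
# Epoch Splunk computes if it misreads the same wall-clock time as UTC+03:00 (= 07:00 UTC).
parsed_epoch = datetime.strptime(raw, FMT).replace(tzinfo=misread_zone).timestamp()

# If the event is indexed the moment it is written, _indextime ~= actual_epoch
# and _time = parsed_epoch, so:
lag_secs = actual_epoch - parsed_epoch
print(lag_secs / 60)  # -150.0 -> the event is stamped 2.5 hours into the future
```

Until the wall clock catches up with that future _time, recent-time-range searches simply do not show the event, which is exactly the "data appears 2.5 hours late" symptom.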


pavankumarh
Path Finder

Hi woodcock, thank you for your response. As you suggested, I ran the search and see lagSecs=-1 and lagSecs=-2 for some events. To troubleshoot time-zone issues, we tried defining the time zone (TZ=Asia/Calcutta) in the props.conf file at \SUF\Input_app\local\ on the host, but we saw no improvement. Please suggest a possible fix if you know one.
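One thing worth double-checking here (standard Splunk behavior, not stated in the thread): TZ is a parse-time setting, and with a universal forwarder parsing normally happens on the indexer (or an intermediate heavy forwarder), so a TZ stanza placed only in the UF's app may never take effect. A sketch of where it would go, assuming the default Windows security sourcetype name; whether TZ applies at all to Windows event log inputs is something to verify for your setup:

```
# props.conf on the indexers (or heavy forwarder), not on the universal forwarder
[WinEventLog:Security]
TZ = Asia/Calcutta
```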


woodcock
Esteemed Legend

If lagSecs is that small (just a few seconds), then the problem is probably not TZ but rather clock drift. Are you using NTP on all of your forwarders, indexers, and search heads? In any case, this is probably not your 120-minute problem; if it were, you would have seen lagSecs values around -7200 (120 minutes = 7200 seconds), so I am unsure what your original problem could be.
