Hi All,
we have onboarded Windows DHCP servers to Splunk Cloud by installing UFs on each server. The DHCP server writes logs to a local log file and the universal forwarder sends the logs directly to Splunk Cloud.
The problem is that logs are being ingested into Splunk with varying time differences. See the screenshot below: the first log was generated at 00:38 and indexed at 05:38, exactly 5 hours' difference, while the second log was generated at 19:58 but indexed at 00:59, showing a 7-hour difference between the event time (_time) and the time in the raw event; it was indexed at 00:59 and _time was picked as 00:58.
Please help to understand what can be the problem.
Thanks,
Bhaskar
And please, don't duplicate the posts.
It's the same issue as https://community.splunk.com/t5/Getting-Data-In/Latency-issue/m-p/587360#M103180
I don't know the format of Windows DHCP logs, but the most typical cause of such time discrepancies is servers logging the time in their own local timezone (without including the timezone in the timestamp) while the event is parsed assuming another timezone.
https://docs.splunk.com/Documentation/Splunk/latest/Data/Configuretimestamprecognition
It's true that Splunk is often able to recognize the time format (and usually gets the timezone right), but an important step in optimizing your inputs is configuring them with explicit time-parsing settings so Splunk doesn't have to "guess" the format and timezone.
So in short: make sure the servers are configured with the correct timezone (I know it's unlikely to have your timezone off by several hours, but it happens; sometimes someone clones a server from a master image and forgets to adjust the timezone or something like that), and verify that the timezone settings on your inputs are correct.
There is also the question of whether Splunk can parse the timestamp from the event at all. If it can't, it will fall back to the time the event was ingested. So if you have significant delays on input, that might affect your timestamps.
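If it helps, here is a minimal props.conf sketch for explicit time parsing. It assumes the default Windows DHCP audit-log layout (a numeric event ID followed by MM/DD/YY,HH:MM:SS) and a made-up sourcetype name, so adjust both to your actual data:

```ini
# props.conf -- goes on the first heavy component (indexer tier / HF),
# or on the UF if you use INDEXED_EXTRACTIONS.
# [dhcp_server_log] is an assumed sourcetype name.
[dhcp_server_log]
# The DHCP timestamp carries no timezone, so state the servers' zone explicitly
TZ = US/Eastern
# Skip the leading numeric ID field, then parse "MM/DD/YY,HH:MM:SS"
TIME_PREFIX = ^\d+,
TIME_FORMAT = %m/%d/%y,%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

With TZ, TIME_PREFIX and TIME_FORMAT set, Splunk no longer has to guess, which removes one source of inconsistent _time values.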
Thanks for the reply. I don't think this is related to the timezone (correct me if I'm wrong) because both logs are from the same server, yet the first is interpreted one way while the second gets different time parsing. My understanding is that the universal forwarder first gives preference to TZ info in the raw event; second, it considers the device's TZ; third, it converts based on the indexer's timezone settings; and last, it uses a TZ configured on the UF itself.
Thanks,
Bhaskar
No, there is no such multi-stage parsing as you describe.
https://wiki.splunk.com/Community:HowIndexingWorks
Look at the last diagram.
If your input uses indexed extractions, the timestamp is parsed at the UF, otherwise it's parsed at the first heavy component (indexer or heavy forwarder).
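As a concrete illustration of that split (sourcetype name is my assumption, and csv is only a plausible choice since Windows DHCP audit logs are comma-separated), where the timestamp is parsed follows from where this setting lives:

```ini
# props.conf deployed to the UF -- INDEXED_EXTRACTIONS makes the UF itself
# do structured parsing, including the timestamp, before forwarding.
# Without it, timestamp parsing happens on the indexer/HF tier instead.
[dhcp_server_log]
INDEXED_EXTRACTIONS = csv
```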
That's helpful, but it still doesn't explain the two different index times for different events from the same machine.
Thanks
Bhaskar
You should check your inputs.conf and props.conf. We don't know what your settings are for reading the files and parsing them. Do you use indexed extractions? Do you have timestamp extraction properly configured?
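One way to see the per-event delay is to compare index time against event time in a search (the index and sourcetype names here are placeholders):

```
index=your_dhcp_index sourcetype=dhcp_server_log
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) avg(lag_seconds) max(lag_seconds) by host
```

If lag_seconds clusters around whole hours, that points at a timezone mismatch rather than a forwarding delay.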