I have a clustered Splunk environment with monitoring in place for quite a few application logs.
Lately, I have been encountering an issue with data collection in Splunk.
For some window of time every day (2 to 5 hours), I do not see any data in Splunk even though the application server is generating logs.
For the rest of the day it works just fine.
Universal Forwarders and indexers are working just fine.
This is affecting dashboards and alerts, since data is being missed.
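As a sanity check, would a search like the following be a reasonable way to confirm whether the events are arriving late rather than not at all? (The index and sourcetype names are placeholders for my environment.)

index=<your_index> sourcetype=<your_sourcetype> earliest=-24h
| eval lag_seconds = _indextime - _time
| timechart span=1h count max(lag_seconds) AS max_lag_seconds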
Example log:
2020-02-13T05:01:45.249-0500 INFO 801 | UNIQ_ID=2AB2130 | TRANS_ID=00000170151fda6c-171dce8 | VERSION=18.09 | TYPE=AUDIT| UTC_ENTRY=2020-02-13T10:01:45.178Z | UTC_EXIT=2020-02-13T10:01:45.230Z,"Timestamp":"2020-02-13T10:01:45.062Z","Data":{"rsCommand":"","rsStatus":"executed","pqr":"2020-02-13T09:57:13.000Z","rsStatusReason":"executed","XYZ":"2020-02-13T09:57:29.000Z","rsMinutesRemaining":"6","remoDuration":"10","internTemperature":"12","ABC":"2020-02-13T10:00:20.000Z","Sucction"}}
Can anyone give some insight if you have faced or come across this kind of issue?
I suspect Splunk is getting confused between the timestamp at the start of the event and the other date/time values inside the event (the ABC, pqr, and XYZ fields in the example log above), but I am not sure how to go about confirming or fixing this.
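If that is indeed the cause, I assume the fix would be something along these lines in props.conf on the indexers (the sourcetype name is a placeholder and these settings are only my guess), forcing Splunk to read only the leading timestamp:

[my_app_sourcetype]
# Anchor timestamp extraction to the start of the event
TIME_PREFIX = ^
# Matches the leading timestamp, e.g. 2020-02-13T05:01:45.249-0500
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# Do not look past the leading timestamp, so the UTC_*/ABC/pqr/XYZ
# values inside the event body are ignored
MAX_TIMESTAMP_LOOKAHEAD = 30

Does that look like the right direction, or is there a better way to rule out timestamp extraction as the culprit?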