I have a clustered Splunk environment with monitoring in place for quite a few application logs. Lately I have been encountering an issue with data collection in Splunk.
For a window of time every day (2 to 5 hours), I do not see any data in Splunk even though the application server is still generating logs. For the rest of the day everything works just fine.
The Universal Forwarders and indexers appear to be working fine. This is affecting dashboards and alerts, as data is being missed.
Example log: 2020-02-13T05:01:45.249-0500 INFO 801 | UNIQ_ID=20200213050500000170151fda6c-171dcee | TRANS_ID=000001da6c-171dce8 | VERSION=1.09 | TYPE=AUDIT | INTERNAL_ERROR_MSG= | UTC_ENTRY=2020-02-13T10:05.178Z | UTC_EXIT=2020-02-13T10:01:45.230Z,"Timestamp":"2020-02-13T10:01:45.062Z","Organization":"abc","Region":"RStS","ApplicationName":"Anoid"},"Data":{"rsCommand":"Clization","rsStatus":"executed","statusTimeStamp":"2020-02-13T09:57:13.000Z","rsStatusReason":"executed","lastRemoTimeStamp":"2020-02-13T09:57:29.000Z","rsMinutesRemaining":"6","remoDuration":"10","interTemperature":"12","interTimeStamp":"2020-02-13T10:00:20.000Z","Successful Execution"}}
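To check whether the "missing" events are actually being indexed late or under the wrong timestamp, I have been running a rough search like the one below (the index and sourcetype names are placeholders for my environment):

index=my_app_index sourcetype=my_app_logs earliest=-24h
| eval lag_seconds = _indextime - _time
| bin _indextime span=1h
| stats count, min(lag_seconds), max(lag_seconds), avg(lag_seconds) by _indextime

If the gaps in the dashboards line up with hours where the lag is large or negative, that would point at timestamp extraction rather than the forwarders.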
Can anyone give some insight if you have faced or come across this kind of issue? I suspect Splunk is getting confused between the timestamp at the start of the event and the other date/time values embedded in the event, such as statusTimeStamp and lastRemoTimeStamp in the example log above, but I am not sure how to go about confirming and solving this.
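For reference, this is the kind of props.conf change I was thinking of trying on the parsing tier (indexers, or heavy forwarders if they do the parsing) to pin the timestamp to the start of the event; the sourcetype name is a placeholder, and the TIME_FORMAT matches the leading timestamp in my example above:

# props.conf on the parsing tier
[my_app_sourcetype]
# Anchor timestamp extraction to the start of the event
TIME_PREFIX = ^
# Matches 2020-02-13T05:01:45.249-0500
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# Stop looking after the leading timestamp so the embedded
# statusTimeStamp / lastRemoTimeStamp values are ignored
MAX_TIMESTAMP_LOOKAHEAD = 30

Would something along these lines be the right approach, or is there a better way to handle events with multiple embedded timestamps?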