One of our top customers using our add-on app is facing an issue with a delay in the indexing of events.
We can reproduce the issue in our local setup as well.
The delay is between 170,000 and 250,000 seconds (2-3 days).
We are using the following search to get the specific events:
index="druva"
| eval inSyncDataSourceName=upper(inSyncDataSourceName)
| eval Epoch_Last_Backed_Up=strptime(Last_Backed_Up, "%b %d %Y %H:%M")
| eval Days=round(((_time - Epoch_Last_Backed_Up)/86400),0)
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M")
| eval lag_sec=_indextime-_time
| table timestamp _time indextime lag_sec severity event_type Alert Alert_Description Last_Backed_Up Days eventDetails clientOS clientVersion inSyncDataSourceName inSyncUserEmail inSyncUserName profileName
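To get an overall picture of the lag, we also run a quick summary on top of the same data (just a sketch; the split-by fields are what we happen to have in our events):
index="druva"
| eval lag_sec=_indextime-_time
| stats count avg(lag_sec) max(lag_sec) perc95(lag_sec) by host sourcetype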
I tried the steps mentioned here: https://docs.splunk.com/Documentation/Splunk/9.0.0/Troubleshooting/Troubleshootingeventsindexingdela...
I also set maxKBps to zero, but the issue still persists.
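For reference, this is the stanza we set on the forwarder (in limits.conf) to remove the thruput limit:
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
# 0 = no throughput limit
maxKBps = 0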
Could you please suggest how we can address this issue?
Appreciate your inputs.
Hi
when the delay is so huge, I suppose the issue could be something other than just delivering events from the source to the indexers. Have you already checked that the nodes have correct time and TZ definitions? You should also check that the events' timestamps have been extracted correctly.
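If the timestamp extraction turns out to be wrong, fixing props.conf for that sourcetype at parsing time usually helps. Something like this (only a sketch; the sourcetype name and the time format are assumptions, adjust them to match your raw events):
# props.conf on the parsing tier (indexer / heavy forwarder)
[druva:events]
TZ = UTC
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30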
Are there any places where those events could be spooled after being read by the UF? Or how are they collected?
As you said that you can reproduce this in your own environment, it's even more probable that the time extraction didn't work correctly or the events have wrong timestamps.
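One quick way to check that is to compare the extracted event time against the raw event and the index time side by side, e.g. something like this (just an idea):
index="druva"
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S %z")
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S %z")
| eval lag_days=round((_indextime-_time)/86400,1)
| table _raw event_time index_time lag_days
If event_time doesn't match the timestamp visible in _raw, the problem is in timestamp extraction, not in delivery.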
r. Ismo