Index and sourcetype delay/lag for last 30 days

harishsplunk7
Explorer

Can you please suggest a query to pull the lag/delay for all indexes and sourcetypes for the last 30 days?

harishsplunk7
Explorer

We see a delay of over five hours in indexing. Is there a way to find out where these events "got stuck"? Or please let me know a query to get how long the logs are delayed.

ITWhisperer
SplunkTrust

How do you know it is 5 hours? Is it always about 5 hours from all hosts? Or for all sourcetypes? Can you isolate a common attribute for all the events which are "delayed"? Which time zone or zones are you operating in?

harishsplunk7
Explorer

I gave five hours as an example of the log delay over the last 30 days; it's not specific to a particular sourcetype. I want to see if there is any log delay in Splunk.

ITWhisperer
SplunkTrust

How would you measure that delay? If it is the difference between the time the event was indexed and the timestamp as stored in _time with the event, you need to look at how that timestamp was derived, which will depend on the sourcetype, where the logged event was picked up and its journey into Splunk.

For example, if I have a log entry which has a timestamp created in one timezone, and the timezone is not part of the timestamp, when it is parsed into _time, it may not be given the timezone I was expecting, and can therefore appear to be hours late (or even early if the timezones are reversed).
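
If you suspect that kind of timezone misparse, one rough way to check (just a sketch; the 24-hour window, the 60-second tolerance and the whole_hours field name are arbitrary choices) is to look for events whose lag clusters around whole hours:

index=* earliest=-24h
| eval lag = _indextime - _time
| eval whole_hours = round(lag/3600)
| where abs(whole_hours) >= 1 AND abs(lag - whole_hours*3600) < 60
| stats count avg(lag) as avg_lag by index, sourcetype, whole_hours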

Also, depending on the application logging the event, the timestamp in the log may not be the same as the time it was written. For example, Apache HTTPD will log events with a timestamp relating to when the request was received, but will only log it when the response has been sent. Admittedly, 5 hours would be a bit extreme in this case, but the point is that the timestamp that Splunk assigns to the event does not necessarily have to be coincident with the time the log event was written to the log.

This is why you should try and determine if there is any pattern to the apparent delay.

You could start by comparing the _indextime to the _time field in your events and see where the large differences are (there is always likely to be some difference, because the log has to get from where it is written to the index, and that always takes some time).

isoutamo
SplunkTrust

Hi

One way to look at what @ITWhisperer described is a query like this:

index=*
| eval iDiff=_indextime - _time
| bin span=1h _time
| stats avg(iDiff) as iDiff by _time host index sourcetype
| where iDiff > 60*60*5
| fieldformat iDiff = tostring(round(iDiff),"duration")

You just need to adjust those parameters to your needs and start digging into whether there is a genuine lag issue, or a problem in your onboarding process where your timestamping is not correct. There could even be a timezone or some other clock issue.
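
If index=* over a full 30 days is too heavy, a tstats-based sketch of the same idea may be cheaper (assuming tstats can aggregate _indextime in your environment; the field name latest_indextime is arbitrary, and the lag is only approximate because _time here is the start of each hourly bucket):

| tstats max(_indextime) as latest_indextime where index=* earliest=-30d by _time span=1h, index, sourcetype
| eval lag = latest_indextime - _time
| where lag > 60*60*5
| fieldformat lag = tostring(round(lag),"duration")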

r. Ismo

harishsplunk7
Explorer

This answer is not relevant to my question. I am looking for the Splunk query to get the index and sourcetype delay. Very simple.

isoutamo
SplunkTrust

You need to understand the context of the issue! Without that understanding, those SPL searches won't tell you what the real issue is!

ITWhisperer
SplunkTrust

How are you measuring lag/delay?

harishsplunk7
Explorer

I am measuring the lag as diff = _time - event_time

PickleRick
SplunkTrust

1. What do you mean by event_time?

2. What is _time assigned from in your sourcetypes?

3. Are your sources properly configured (time synchronized, properly set timezones)?

Generally speaking - are your sources properly onboarded, or are you just ingesting "something"?

isoutamo
SplunkTrust

Usually _time is the event time. Then there is also _indextime, which is the time when the event was indexed. Usually, when we are talking about lag, it is _indextime - event time (_time).

If/when your _time is something other than the real event time, then you have some issues (usually several) in your onboarding process, as @PickleRick said.
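
To answer the original question directly, a starting point over the last 30 days could be something like this (just a sketch; the field names avg_lag and max_lag are arbitrary, and index=* over 30 days can be an expensive search):

index=* earliest=-30d@d
| eval lag = _indextime - _time
| stats avg(lag) as avg_lag max(lag) as max_lag by index, sourcetype
| fieldformat avg_lag = tostring(round(avg_lag),"duration")
| fieldformat max_lag = tostring(round(max_lag),"duration")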
