Splunk Search

How to compute _indextime-_time difference average with tstats?

ctaf
Contributor

Hi,

I'd like to calculate the average latency (_indextime - _time) with the tstats command, but I cannot make it work:

| tstats avg(_indextime-_time) where (index=* OR index=_*) by index

Splunk thinks "_indextime-_time" is a field name. How can I compute the difference in the tstats?

Thank you

1 Solution

somesoni2
Revered Legend

The tstats command doesn't support aggregations on _time other than min and max. Give this version a try:

| tstats count WHERE index=* OR index=_* by _time _indextime index
| eval latency=abs(_indextime-_time)
| stats sum(latency) as sum sum(count) as count by index
| eval avg=sum/count

Update
Thanks @rjthibod for pointing out the automatic rounding of _time. If latency rounded to 1 second is good enough, use the version above. If you want more precision, down to the millisecond, use this version:

| tstats count WHERE index=* OR index=_* by _time _indextime index span=1ms
| eval latency=abs(_indextime-_time)
| stats sum(latency) as sum sum(count) as count by index
| eval avg=sum/count

You can specify the time scale in microseconds (us), milliseconds (ms), centiseconds (cs), or deciseconds (ds) for more precision.
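For example, switching to microsecond granularity follows the same pattern — a sketch adapting the query above (only the span value changes; not separately tested):

| tstats count WHERE index=* OR index=_* by _time _indextime index span=1us
| eval latency=abs(_indextime-_time)
| stats sum(latency) as sum sum(count) as count by index
| eval avg=sum/count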


nunoaragao
Path Finder
| tstats earliest(_time) as etime where index=* by index _indextime
| eval delta=(etime-_indextime)/60 
| eval _time=_indextime  
| timechart span=10m min(delta) by index limit=0
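If you want the average latency per bucket rather than the minimum, the same shape of query should work with avg — a sketch adapting the query above (the subtraction order and the avg_latency_sec field name are my own choices, giving positive values in seconds):

| tstats earliest(_time) as etime where index=* by index _indextime
| eval delta=_indextime-etime
| eval _time=_indextime
| timechart span=10m avg(delta) as avg_latency_sec by index limit=0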


rjthibod
Champion

I think this approach will always round _time to the closest second, hence throwing off the answers. Double-check by running this:

| tstats count WHERE index=* OR index=_* by _time _indextime index | rename _time as time

rjthibod
Champion

@ctaf, you might want to reconsider. As my other comment says, this approach rounds off the _time field to the nearest second.

Here is a quick test: run the command @somesoni2 gave and then run mine. If you get two different answers for the average, then there is a problem. The tstats approach would be faster and better if it didn't round off _time.


rjthibod
Champion

You cannot do that kind of eval inside tstats, and tstats cannot be used to pull out the individual events you would need.

Instead, you have to do this without tstats:

(index=* OR index=_*) 
| fields _time index _indextime 
| fields - _raw
| stats avg(eval(_indextime - _time)) as avg by index
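To make the result easier to read, eval's tostring(X, "duration") can format the average (which is in seconds) as HH:MM:SS — the last line below is my own addition, not part of the original answer:

(index=* OR index=_*)
| fields _time index _indextime
| fields - _raw
| stats avg(eval(_indextime - _time)) as avg by index
| eval avg_readable=tostring(round(avg), "duration")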