Splunk Search

Time Stamp in Stats table - event relevance

nowplaying
Explorer

I'm generating a stats table to count the occurrence of errors in our production app logs and presenting the top 10 errors to our engineering team daily. They would like to have a time stamp included in the table so they can determine relevance. The time stamp needs to be the time stamp of the last error message seen for each count. I'm not sure how to present this time in a stats table, or whether it's even possible.

The idea is that if a high-volume error occurred but it's not an error that occurs continuously, they would like to discount it when viewing the daily report.

Hope this makes sense. If you have a better idea on how to present this data I'm all ears.

1 Solution

Ayn
Legend

This should do the trick:

<yourbasesearch>
| stats count, first(_time) as "Most recent event" by errortype
| convert ctime("Most recent event")
| sort -count
| head 10
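A note on why this works: first() returns the first value the stats command sees for each group, and Splunk returns events in reverse-chronological order by default, so first(_time) is the most recent timestamp. If your base search alters that order (for example with an explicit sort), max(_time) is an order-independent alternative. A sketch, reusing the field names from the answer above:

<yourbasesearch>
| stats count, max(_time) as "Most recent event" by errortype
| convert ctime("Most recent event")
| sort -count
| head 10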


Ayn
Legend

True. Edited my answer.


gkanapathy
Splunk Employee
Splunk Employee

You may want to add | head 10 to show only the 10 most common values of errortype (or whatever else you're counting by).
