Splunk Search

Time Stamp in Stats table - event relevance

nowplaying
Explorer

I'm generating a stats table that counts the occurrence of errors in our production app logs and presents the top 10 errors to our engineering team daily. They would like a timestamp included in the table so they can judge relevance. The timestamp needs to be the time of the last error message seen for each count. I'm not sure how to present this time in a stats table, or whether it's even possible.

The idea is that if a high-volume error occurred but it's not an error that occurs continuously, they would like to discount it when viewing the daily report.

Hope this makes sense. If you have a better idea of how to present this data, I'm all ears.

1 Solution

Ayn
Legend

This should do the trick:

<yourbasesearch>
| stats count,first(_time) as "Most recent event" by errortype 
| convert ctime("Most recent event")
| sort -count
| head 10
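
A note on first(_time): it returns the timestamp of the first event the search emits, which is the most recent event only because Splunk returns events in reverse-chronological order by default. If your base search alters that ordering, latest(_time) (or max(_time)) is an order-independent way to get the most recent timestamp. A minimal variant, assuming the same errortype field:

<yourbasesearch>
| stats count,latest(_time) as "Most recent event" by errortype
| convert ctime("Most recent event")
| sort -count
| head 10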


Ayn
Legend

True. Edited my answer.


gkanapathy
Splunk Employee

You may want to add | head 10 to show only the 10 most common uri_host values (or errors, or whatever else you're counting by).
