Splunk Search

How to get the timestamp of each peak that occurs in a sparkline?

Contributor

Does anyone know how to get a timestamp of the peak(s) that occur in a sparkline? The idea is that I have multiple users in a chart that has UID, sparkline, count. I have an overlay that puts a bubble on the peak, but I don't know how to get the timestamp of that peak, or if there are multiple peaks how to get all of the timestamps.

Any thoughts?

1 Solution

SplunkTrust

Say your search right now looks like

<your search terms> | stats sparkline count by user

which is a nice simple sparkline - just the count of events over time for each user.

You can also pull out the peak for each user with some search language. Doing this right technically requires that we specify the granularity, but that's probably not a problem. Here I've used 5 minutes as the granularity, so you'll probably need to change that to suit.

<your search terms> 
| bin _time span=5min 
| streamstats count as per_user_count by _time user 
| sort 0 - user per_user_count 
| eventstats first(_time) as peakTime max(per_user_count) as peakCount by user 
| sort 0 - _time 
| fields - per_user_count 
| stats sparkline count last(peakCount) as peakCount last(peakTime) as peakTime by user

Breaking it down: the bin command rounds all of the timestamps down to the nearest 5 minutes. The streamstats command considers the data set separately for each combination of user and _time, and for each such combination it counts the events. Then the sort command sorts first by user and, within that, by per_user_count descending, so the peak bucket for each user comes first.
The eventstats command then considers the data for each user (disregarding _time now): it takes the first value seen for _time as peakTime and the maximum per_user_count as peakCount. Because of the sorting, these are the peak count and the timestamp of that peak, respectively (or at least, the timestamp rounded down to the nearest 5 minutes). The second sort command restores the default reverse-time order (this may be unnecessary depending on your use case), and the final stats builds the sparkline and carries peakCount and peakTime through for each user.
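The original question also asked how to get all of the timestamps when several buckets tie for the peak. One possible variation (a sketch, untested; the peakTimes field name and the strftime format are my own choices) pre-aggregates into buckets and then collects every bucket whose count equals the peak:

<your search terms> 
| bin _time span=5min 
| stats count as per_user_count by _time user 
| eventstats max(per_user_count) as peakCount by user 
| eval peakTime=if(per_user_count=peakCount, strftime(_time, "%Y-%m-%d %H:%M"), null()) 
| stats sparkline(sum(per_user_count)) as sparkline sum(per_user_count) as count values(peakTime) as peakTimes max(peakCount) as peakCount by user

Here sparkline(sum(per_user_count)) rebuilds the sparkline from the pre-bucketed counts, and values(peakTime) keeps one entry per tied peak, already converted to a readable string.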

View solution in original post


Splunk Employee

Great stuff.
Little typo:
Last command should be

 | stats sparkline count last(peakCount) as peakCount last(peakTime) as peakTime by user

Thx,
Holger

SplunkTrust

Thanks very much for spotting that. I fixed the typo inline. So maybe on balance once you read this we should delete these little comments of ours. 😃
