Splunk Search

Single value panel with trend based on avg(count)

marco_carolo
Path Finder

Hello there.

What I'm trying to do is the following:

 

search | bucket span=60s _time | stats count by _time | ...

 

I want to achieve, if possible, the following:

 

  1. Calculate the average per minute of the count from my search (if I append stats avg(count) I get the current value), but then I can't:
  2. Have the Single Value panel inside my dashboard correctly display the trend based on the average values.

Is there any way to achieve this result?

 

At the moment, every attempt I make to compare those values isn't working 😞

Labels (5)
0 Karma

PickleRick
SplunkTrust

Bucketing is usually not needed if you just need to do a timeseries calculation. There's a separate command for this - timechart.

And I'd approach this by "adjusting" time...

<<your search>> | timechart count span=1m 
| eventstats max(_time) as maxtime
| eval _time=if(_time=maxtime,maxtime,maxtime-60)
| stats avg(count) by _time

This way you get the value for the latest minute and the average value per minute for the remainder of your search period. And now you can use the single value visualisation with a trend comparison to the value "a minute before".

EDIT: As this was marked as the solution - please see my other solution further down the thread, because this one was based on a misunderstanding of what the OP really needed.

0 Karma

marco_carolo
Path Finder

@PickleRick 

 

it seems that the avg value refers to the current minute, and not to the average across all the minutes in the 10m timespan...

 

I have to take 10 values (one count for each minute of the last 10 minutes),

calculate the average of those counts,

and have the trend indicator show whether this average is increasing or decreasing.

0 Karma

PickleRick
SplunkTrust

I don't understand 🙂

I thought you wanted a value of count of events from the last minute and the average per-minute value from some previous minutes.

That's what this does. Firstly it calculates per-minute statistics with |timechart.

Then it adds a field containing a timestamp of the latest minute (so we can differentiate between the latest minute and the previous ones).

Then it rewrites the timestamp for the remaining minutes so they can be aggregated with stats.

And finally, | stats avg(count) gives you two values - one is the "average" of the single value from the last minute, and the other is the average calculated from the previous minutes.
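
For clarity, here is the same pipeline again with inline comments (a sketch only; the ``` ... ``` inline comment syntax assumes a reasonably recent Splunk version):

<<your search>> | timechart count span=1m ``` per-minute event counts ```
| eventstats max(_time) as maxtime ``` timestamp of the most recent minute ```
| eval _time=if(_time=maxtime,maxtime,maxtime-60) ``` collapse all earlier minutes onto a single timestamp ```
| stats avg(count) by _time ``` first row: average of the previous minutes; second row: the last minute's count ```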

Is that not what you wanted?

 

marco_carolo
Path Finder

The request I got is to calculate the average number of calls to a specific function per minute, over a 10-minute window.

What my team leader expects is a single value: over the last 10 minutes, this function was called on average 400 times per minute.

What I'm doing is bucketing by minute, counting, and then averaging over the resulting values.

What I want to add is the trend and the sparkline in a single value panel inside my dashboard, so I can see whether the average value is rising or falling.

0 Karma

PickleRick
SplunkTrust

Ahhh... I thought you wanted to have a value from last minute compared to an average of previous values.

But you simply want a moving average over a sliding window.

Just use the timechart (as I wrote before - it's a dedicated command for analysing time series so there's no need to fiddle manually with buckets) and do a streamstats.

<<your_search>> | timechart count span=1m | streamstats window=10 avg(count)

Alternatively you can use |trendline

<<your_search>> | timechart count span=1m |  trendline sma10(count)

Remember that with time aligned to full minutes you might want to set your search time range to full minutes as well; otherwise you'll get "not-full" values at the ends of the range, since they correspond to only fractions of a minute.
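
For example, a minimal sketch using snap-to-minute time modifiers in the base search so the window only covers whole minutes (the field name avg_per_minute is just illustrative):

<<your_search>> earliest=-10m@m latest=@m
| timechart count span=1m
| streamstats window=10 avg(count) as avg_per_minute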

marco_carolo
Path Finder

@PickleRick

Can you be more specific about how to get full values, instead of partial ones, for each minute?

I need to achieve that...

0 Karma

PickleRick
SplunkTrust

If you do - let's say at 10:35:15 - a |timechart with span=1m over the last 5 minutes, you'll get data from 10:30:15 to 10:35:15 split into minute-long buckets, aligned at full minutes. So you'll get 6 buckets - one containing events from 10:30:15 to 10:31:00, another one from 10:31:00 to 10:32:00, and so on up to 10:35:00-10:35:15.

Obviously, the first and last buckets will be smaller than the rest of them.

That's the default behaviour of |timechart.

You can, however, use the partial=f option for |timechart, which will omit the not-full first and last buckets from the result. In our example case you'd get only four buckets - 10:31-10:32, 10:32-10:33, 10:33-10:34 and 10:34-10:35.
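
As a sketch, combining that option with the earlier moving-average search (avg_per_minute is an illustrative field name):

<<your_search>>
| timechart partial=f count span=1m
| streamstats window=10 avg(count) as avg_per_minute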

marco_carolo
Path Finder

@PickleRick 

 

This search is giving me strange results... I expect results of approximately 400 and I'm getting 12-16 instead...

0 Karma

marco_carolo
Path Finder

Ciao Giuseppe 🙂

 

What I need to do, or rather, what I was asked to do is this:

Get the average count of calls per minute over 10 minutes.

What I wanted to do is have the single value panel display the current average per minute, plus the trend of the previous per-minute averages over the 10-minute timespan, so I can see whether the value is increasing or decreasing.

0 Karma

gcusello
SplunkTrust

Hi @marco_carolo,

you can use my search with a different timespan

index=your_index
| bucket span=1m _time 
| stats count by _time
| bucket span=10m _time 
| stats avg(count) AS avg BY _time
| sort -_time
| head 2
| reverse
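
A commented sketch of what each step does (the ``` ... ``` inline comment syntax assumes a reasonably recent Splunk version):

index=your_index
| bucket span=1m _time ``` align events to 1-minute buckets ```
| stats count by _time ``` events per minute ```
| bucket span=10m _time ``` group the per-minute rows into 10-minute buckets ```
| stats avg(count) AS avg BY _time ``` average per-minute count for each 10-minute bucket ```
| sort -_time ``` newest bucket first ```
| head 2 ``` keep the two most recent 10-minute buckets ```
| reverse ``` oldest first, so the single value trend compares against the previous bucket ```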

Ciao.

Giuseppe

0 Karma

marco_carolo
Path Finder

It seems that I'm getting different results:

search | bucket span=1m _time | stats count by _time | stats avg(count) now returns 386 (which seems correct - it's the average of all the values produced by the count)

Your query returns 269 instead...

0 Karma

gcusello
SplunkTrust

Hi @marco_carolo,

just for debugging, fix a past analysis period, e.g. from -80 minutes to -60 minutes; otherwise the results keep changing!

Also, it's probably just a typo in your message, but you need to use bucket twice, once before each stats.
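
For example, a minimal sketch of such a fixed past window using inline time modifiers (the index name is a placeholder):

index=your_index earliest=-80m@m latest=-60m@m
| bucket span=1m _time
| stats count by _time
| bucket span=10m _time
| stats avg(count) AS avg BY _time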

Ciao.

Giuseppe

0 Karma

gcusello
SplunkTrust

Hi @marco_carolo,

let me understand your need:

you want to display a value (the per-minute average of the event count) and you want, for example, the average of the last hour and the trend with respect to the previous hour - is that correct?

If this is your need, try something like this:

index=your_index
| bucket span=1m _time 
| stats count by _time
| bucket span=1h _time 
| stats avg(count) AS avg BY _time
| sort -_time
| head 2
| reverse

As a test, you could try this:

index=_internal
| head 1000000
| bucket span=1m _time
| stats count BY _time
| bucket span=1h _time
| stats avg(count) AS avg BY _time
| sort -_time
| head 2
| reverse

Ciao.

Giuseppe

0 Karma