Reporting

How do you calculate the mean of a timewrap series?

mmdacutanan
Explorer

I am trying to write a query that counts the number of errors for the last 5 minutes and then compares that error total against the average for the previous 15 minutes. I want to trigger an alert if series s0 is greater than the mean.

I found part of the solution from Splunk Answers (https://answers.splunk.com/answers/151921/how-to-set-up-alert-when-error-count-of-latest-week-is-gre...), but my eval expression to calculate the mean field does not seem to work; the field is just empty. What am I not doing right here?

Query is below:

index=cisco_uc sourcetype=RCD
| where like(Variable10,"Tx|%")
| bucket _time span=5m
| stats latest(Variable10) as Variable10 by _time Variable2
| timechart span=5m count(eval(like(Variable10,"%|U%"))) as U_Count
| timewrap 5min series=short
| eval mean=(U_Count_s1 + U_Count_s2 + U_Count_s3)/3
| where U_Count_s0 > mean
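
A note on why mean can come out empty: eval arithmetic returns null when any field in the expression is missing, so if the search time range does not cover all three earlier wraps, U_Count_s1, U_Count_s2, or U_Count_s3 will be absent and the whole sum becomes null. A null-safe variant of the last two lines, assuming the series=short field names used above, could look like this:

| eval mean=(coalesce(U_Count_s1,0) + coalesce(U_Count_s2,0) + coalesce(U_Count_s3,0))/3
| where U_Count_s0 > mean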

skoelpin
SplunkTrust

@mmdacutanan did this solve your problem? If so, can you accept an answer to close out the question?


skoelpin
SplunkTrust

Try it like this. I tested against an ITSI summary index, so replace the first line with your own search and swap the _raw references for your field names. Also, you may not want to count the latest 5-minute bucket, since it is still accepting new data and its count may be artificially low. In the example below, I'm comparing the latest full 5-minute bucket count against the average of the three 5-minute buckets in the 15 minutes preceding it. It runs over an hour-long time range in under 1 second!

index=itsi_summary 
| bin _time span=5m
| stats count(_raw) AS _raw by _time
| streamstats sum(_raw) AS fifteen_min_window time_window=15m 
| streamstats sum(_raw) AS five_min_window time_window=5m
| streamstats count AS primary_key
| eval five_min_value=if(primary_key=2,'five_min_window',"")
| eval fifteen_min_value=if(primary_key=4,'fifteen_min_window',"")
| eval average=fifteen_min_value/3
| stats max(five_min_value) AS five_min_count max(average) AS avg_count_fifteen_min
| eval compare=if('avg_count_fifteen_min'>'five_min_count',"Avg is greater than latest bucket","Latest bucket is greater than average")
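
To turn this into the alert from the original question, one option (a sketch, not spelled out above) is to keep only the case where the latest bucket exceeds the average and let the alert fire whenever the search returns a result:

| where five_min_count > avg_count_fifteen_min

Set the alert condition to trigger when the number of results is greater than 0.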

mmdacutanan
Explorer

Hi skoelpin! Thank you so much! I think this gets me close enough to what I need. 🙂 By the way, are you just looking at the last 20 minutes?


mmdacutanan
Explorer

Hello again skoelpin! Thank you so much again for the query you provided. It gave me an idea of how to come up with my own query. I don't use streamstats often, but I will now! 🙂 Here is my final query. I added some stdev calculations: I don't want too many false alerts, so I calculate upper and lower bounds from the mean and standard deviation:

index=uc sourcetype=rcd
| where like(Variable10,"Tx|%|NS|%")
| bucket _time span=5m
| stats latest(Variable10) as Variable10 by _time Variable2
| timechart span=5m count(Variable10) as NS_Count
| streamstats count as pri_key
| streamstats avg(NS_Count) as avg, stdev(NS_Count) as stdev window=3
| eval avg=round(avg,2)
| eval stdev=round(stdev,2)
| eval lowerBound=(avg-stdev*2)
| eval upperBound=(avg+stdev*2)
| eval 5m_value=if(pri_key=4,'NS_Count',"")
| eval 15m_prev_upperBound=if(pri_key=3,'upperBound',"")
| eval 15m_prev_lowerBound=if(pri_key=3,'lowerBound',"")
| eval 15m_prev_avg=if(pri_key=3,'avg',"")
| eval 15m_prev_stdev=if(pri_key=3,'stdev',"")
| stats values(5m_value) as 5m_value values(15m_prev_upperBound) as 15m_prev_upperBound values(15m_prev_lowerBound) as 15m_prev_lowerBound values(15m_prev_avg) as 15m_prev_avg values(15m_prev_stdev) as 15m_prev_stdev
| eval boolean=if('5m_value'>'15m_prev_upperBound',"1","0")
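
For the alert itself, one way to use this (a sketch building on the query above, not part of the original post) is to keep only the case where the latest 5-minute count breaks above the upper band and trigger whenever the search returns a result:

| where '5m_value' > '15m_prev_upperBound'

The boolean field from the last line could be used the same way, with the alert set to fire when the result count is greater than 0.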


HiroshiSatoh
Champion

Try this!

|timechart span=5m count(eval(like(Variable10,"%|U%"))) as U_Count
↓
|timechart span=5m count(eval(if(like(Variable10,"%|U%"),1,0))) as U_Count

mmdacutanan
Explorer

Thanks HiroshiSatoh! skoelpin's query actually brought me closer to the solution. I posted my final query above.
