Splunk Search

How to include no event as 0 in avg calculation?

Dantuzzo
Loves-to-Learn Lots

Hi,

I'm struggling to calculate an hourly or daily average when a bucket has no events at all. In theory an empty bucket should count as 0 and be included in the average. Currently my query only averages over the time buckets that actually contain events for a given severity; empty buckets produce no row, so they are left out of the average entirely. Query, current results, and expected results below:

index=myindex earliest=-7d@d latest=now()
    | bin _time span=1h
    | fields _time, severity
    | stats count by _time, severity
    | eval previous=if(_time<relative_time(now(),"@d"),count,null())
    | eventstats avg(previous) as average by severity
    | eval change_percent=round(((count-average)*100)/count,0)."%"
    | table _time severity count average change_percent


_time             severity  count  average  change_percent
2022-12-16 10:00  High      2      2.25     -12%
2022-12-16 12:00  Low       2      2        0%
2022-12-16 14:00  Medium    3      2        33%
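To make the gap concrete: averaging only the buckets that have events gives a different number than averaging over every bucket with the empty ones counted as 0. A quick illustration (plain Python, not SPL; the hours and counts are made up):

```python
# Hourly counts for one severity: only 2 of 5 hours had events.
observed = {10: 2, 14: 3}      # hour -> event count
all_hours = range(10, 15)      # the full search window: hours 10..14

# Average over observed buckets only (what the current query does):
avg_observed = sum(observed.values()) / len(observed)   # (2 + 3) / 2

# Average with empty buckets counted as 0 (what is wanted):
filled = [observed.get(h, 0) for h in all_hours]        # [2, 0, 0, 0, 3]
avg_filled = sum(filled) / len(filled)                  # 5 / 5

print(avg_observed)  # 2.5
print(avg_filled)    # 1.0
```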


I'd like to show something like this:

_time                   severity  count  average  change_percent
2022-12-16 10:00-11:00  High      2      0.5      x%
2022-12-16 10:00-11:00  Medium    0      1        -x%
2022-12-16 10:00-11:00  Low       0      1        x%
2022-12-16 11:00-12:00  High      0      0.5      x%
2022-12-16 11:00-12:00  Medium    0      1        x%
2022-12-16 11:00-12:00  Low       0      1        x%
2022-12-16 12:00-13:00  High      0      0.5      x%
2022-12-16 12:00-13:00  Medium    0      1        x%
2022-12-16 12:00-13:00  Low       2      1        x%


Thank you for any help!


yuanliu
SplunkTrust

Also using eventstats, I have a more literal filler method:

index=myindex earliest=-7d@d latest=now()
| bin _time span=1h
| fields _time, severity
| addinfo ``` so we know how many hours are in search period ```
| eval filler = mvrange(0, round((info_max_time - info_min_time) / 3600))
| stats count values(filler) as filler by _time, severity
| appendpipe
    [ eval count = 0
    | mvexpand filler ``` account for every hour ```
    | eval _time = relative_time(now(), "-" . tostring(filler) . "h@h")
    | bin span=1h _time]
| stats sum(count) as count by _time, severity ``` count + 0 == count ```
| eval previous=if(_time<relative_time(now(),"@d"),count,null())
| eventstats avg(previous) as average by severity
| eval change_percent=round(((count-average)*100)/count,0)."%"
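The filler trick above can be sketched outside SPL: emit a zero row for every (hour, severity) pair, append those to the real counts, and sum, so that count + 0 == count and empty hours survive as 0. A hypothetical Python analogue (not the author's code; hours and severities are made up):

```python
# Real events: (hour, severity) -> count, with gaps (no 11:00 or 13:00 rows).
real = {(10, "High"): 2, (12, "Low"): 2, (14, "Medium"): 3}

hours = range(10, 15)
severities = {"High", "Medium", "Low"}

# appendpipe analogue: a zero row for every hour/severity combination.
merged = {(h, s): 0 for h in hours for s in severities}

# "stats sum(count)" analogue: add the real counts on top of the zeros.
for key, n in real.items():
    merged[key] = merged.get(key, 0) + n

print(merged[(11, "High")])  # 0 -- an empty bucket now exists as a row
print(merged[(10, "High")])  # 2 -- real counts are unchanged
```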

richgalloway
SplunkTrust

Have you met appendpipe?  The command defines a subpipeline that runs against the current results and is often used to insert values when results are missing.  In this case, we call appendpipe three times because there are three values to test for and insert if missing.

index=_internal earliest=-7d@d latest=now()
| bin _time span=1h 
| fields _time, log_level 
| stats count by _time, log_level 
| appendpipe 
    [ stats values(log_level) as levels by _time 
    | eval log_level=if(isnull(mvfind(levels,"ERROR")),"ERROR", null()),count=0 
    | where isnull(mvfind(levels,"ERROR")) 
    | fields - levels ] 
| appendpipe 
    [ stats values(log_level) as levels by _time 
    | eval log_level=if(isnull(mvfind(levels,"WARN")),"WARN", null()),count=0 
    | where isnull(mvfind(levels,"WARN")) 
    | fields - levels ] 
| appendpipe 
    [ stats values(log_level) as levels by _time 
    | eval log_level=if(isnull(mvfind(levels,"INFO")),"INFO", null()),count=0 
    | where isnull(mvfind(levels,"INFO")) 
    | fields - levels ] 
| sort + _time 
| eval previous=if(_time<relative_time(now(),"@d"),count,null()) 
| eventstats avg(previous) as average by log_level 
| eval change_percent=round(((count-average)*100)/count,0)."%" 
| table _time log_level count average change_percent
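One detail to watch in the final eval: change_percent divides by count, and the whole point of the filling is that count is now 0 for some rows, so those rows get no percentage (SPL division by zero yields null). A guarded variant of the same arithmetic, sketched in Python rather than SPL (the None fallback is my assumption about the desired behavior, not from the thread):

```python
def change_percent(count: float, average: float):
    """Percent change of count vs. average, mirroring
    round(((count-average)*100)/count, 0) . "%"
    but returning None instead of dividing by zero."""
    if count == 0:
        return None
    return f"{round((count - average) * 100 / count)}%"

print(change_percent(2, 2.25))  # matches the -12% row in the question
print(change_percent(0, 0.5))   # None -- zero-filled bucket, no percentage
```

An alternative would be to divide by average instead, which stays defined for zero-count buckets; which denominator is right depends on what the percentage is meant to express.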


---
If this reply helps you, Karma would be appreciated.