Getting Data In

Calculate percentage increase/decrease of indexing volume compared to average indexing volume

isha_rastogi
Path Finder

I want to trigger an alert if there is a 50% increase or decrease in today's indexing volume versus the average indexing volume of the last 7 days.
I've written the query below, but the last 7 days cover 60 indexes while today covers only 57, and the query returns only those 57. I want to write a 0 for today's volume of the missing indexes, so that the alert triggers when an index has no data and so that I can calculate the percentage change.

index=_internal source=*metrics.log group=per_index_thruput earliest=-7d latest=now NOT (series=_* OR series=summary) 
| bucket _time span=1d 
| stats sum(kb) as total by series,_time 
| stats avg(total) as average by series 
| eval averageMB=round(average/1024,2) 
| table series averageMB 
| join type=left series 
  [ search index=_internal source=*metrics.log group=per_index_thruput earliest=@d latest=now NOT (series=_* OR series=summary) 
  | bucket _time span=1h 
  | stats sum(kb) as total1 by series,_time 
  | stats avg(total1) as average1 by series 
  | eval averageMBD=round(average1/1024,2) 
  | table series averageMBD ] 
| table series averageMB averageMBD
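With join type=left, an index that has no data today comes back with a null averageMBD rather than a 0. A minimal sketch of one fix, using the field names from the query above: fill the null before the final table so the alert math has something to compare.

| fillnull value=0 averageMBD
| table series averageMB averageMBD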

1 Solution

somesoni2
SplunkTrust

Give this a try

index=_internal source=*license_usage.log type=Usage earliest=-7d@d latest=now 
| bucket span=1d _time | stats sum(b) as usage by _time idx 
| eval period=if(_time<relative_time(now(),"@d"),"averageMB","averageMBD") 
| chart avg(usage) over idx by period
| eval averageMB=round(averageMB/1024/1024,2) | eval averageMBD=round(averageMBD/1024/1024,2)
| rename idx as series
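The if() on _time splits the daily buckets into a before-today group and a today group, and chart pivots those into the averageMB and averageMBD columns. For the 50% alert in the original question, a sketch of the extra steps that could be appended to this search (fillnull first, so an index with no data today counts as 0 rather than null; a brand-new index with averageMB=0 yields a null percentage, which this sketch does not guard against):

| fillnull value=0 averageMB averageMBD
| eval perc_change=round((averageMBD-averageMB)/averageMB*100,2)
| where abs(perc_change)>=50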


isha_rastogi
Path Finder

Can I use something like the following to get the percentage change?

index=_internal source=*metrics.log group=per_index_thruput earliest=-7d latest=now
NOT (series=_* OR series=summary)
| bucket _time span=1d
| stats sum(kb) as totalDay by series,_time
| stats avg(totalDay) as avgTotal by series
| eval avgTotalMB=round(avgTotal/1024,2)
| table series avgTotalMB 
| join type=outer series
  [ search index=_internal source=*metrics.log group=per_index_thruput
    earliest=@d latest=now NOT (series=_* OR series=summary)
  | bucket _time span=1h 
  | stats sum(kb) as totalHour by series,_time
  | stats avg(totalHour) as avgHour by series
  | eval todaysCalcMBD=round(24*avgHour/1024,2)
  | table series todaysCalcMBD ]
| eval perc_change = (todaysCalcMBD-avgTotalMB)/avgTotalMB
| eval perc = perc_change * 100
| table series avgTotalMB todaysCalcMBD perc
| fillnull value=0
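Note that with fillnull at the very end, a series missing from the subsearch has a null todaysCalcMBD when perc_change is evaluated, so perc comes out null and is then filled with 0, which a 50% threshold will never catch. A sketch of a reordered tail (same field names as above) that makes a missing index read as a 100% drop instead:

| fillnull value=0 todaysCalcMBD
| eval perc_change = (todaysCalcMBD-avgTotalMB)/avgTotalMB
| eval perc = perc_change * 100
| table series avgTotalMB todaysCalcMBD perc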


DalJeanis
SplunkTrust

Here's one way...

index=_internal source=*metrics.log group=per_index_thruput earliest=-7d latest=now 
NOT (series=_* OR series=summary) 
| bucket _time span=1d 
| stats sum(kb) as totalDay by series,_time 
| stats avg(totalDay) as avgTotal by series 
| eval avgTotalMB=round(avgTotal/1024,2) 
| table series avgTotalMB
| rename COMMENT as "Above, we get your weekly averages."

| rename COMMENT as "We use appendpipe so we can grab the series names."
| appendpipe 
  [| rename COMMENT as "First, we create one zero detail record for each hour for each series"
   | eval hours=mvrange(relative_time(now(),"@d"),now(),3600) 
   | mvexpand hours 
   | eval _time=hours 
   | eval kb=0
   | table _time series kb 

   | rename COMMENT as "Next, we add the real detail, but don't sum it yet."
   | append  
     [ search index=_internal source=*metrics.log group=per_index_thruput 
       earliest=@d latest=now NOT (series=_* OR series=summary)
       | bucket _time span=1h] 
   | stats sum(kb) as totalHour by series,_time 
   | stats avg(totalHour) as avgHour by series 
   | eval todaysCalcMBD=round(24*avgHour/1024,2) 
   | table series todaysCalcMBD
   ] 

| rename COMMENT as "Finally, we roll them together."
| stats values(*) as * by series
| table series avgTotalMB todaysCalcMBD

This is air code, but it should be close. I've modified the variable names to make it clear what they are, and to avoid problematic terms. Best practices are to avoid common words and reserved words like "average" as variable names.
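The zero-record scaffold inside the appendpipe can be tested on its own with makeresults, no index access needed; the series name test_index below is just a placeholder:

| makeresults
| eval series="test_index"
| eval hours=mvrange(relative_time(now(),"@d"),now(),3600)
| mvexpand hours
| eval _time=hours, kb=0
| table _time series kb

mvrange emits one epoch value per hour from midnight to now, and mvexpand turns that multivalue field into one row per hour, so every series is guaranteed a row even when the real search returns nothing for it.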



isha_rastogi
Path Finder

@somesoni2 - How can I use relative_time so that the average considers data only through yesterday, i.e. earliest=-7d@d and latest=-2d@d?


isha_rastogi
Path Finder

@somesoni2 - What if I have to consider the average only up to yesterday? If I use now, it will include today's data, which gives incorrect metrics.
Can you help me with the earliest and latest for the average and for today's volume?
For the one-day window I am using earliest=-24h latest=now, which gives me the indexing volume for the last 24 hours; the average (the comparison metric) should not include those 24 hours. I am not able to work out the earliest/latest for both the average and the one-day window. My attempt:
index=_internal source=*license_usage.log type=Usage earliest=-7d@d latest=-24h
| bucket span=1d _time | stats sum(b) as usage by _time idx
| eval period=if(_time<relative_time(now(),"@d"),"averageMB","averageMBD")
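One possible adjustment, assuming calendar days are acceptable instead of a rolling 24-hour window: compare yesterday's full day against the average of the seven full days before it. The yesterdayMB label is just illustrative.

index=_internal source=*license_usage.log type=Usage earliest=-8d@d latest=@d
| bucket span=1d _time
| stats sum(b) as usage by _time idx
| eval period=if(_time<relative_time(now(),"-1d@d"),"averageMB","yesterdayMB")
| chart avg(usage) over idx by period

Here latest=@d stops the search at last midnight, so today's partial data never enters either bucket, and the relative_time(now(),"-1d@d") boundary keeps yesterday out of the average.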


DalJeanis
SplunkTrust

I'd like to see a scaling factor in there for today's usage, something like

index=_internal source=*license_usage.log type=Usage earliest=-7d@d latest=now (idx=ciscoise OR idx=cyberark_logs OR idx=netscaler)
 | bucket span=1d _time 
 | stats sum(b) as usage by _time idx 
 | eval period=if(_time<relative_time(now(),"@d"),"averageMB",mvappend("todayMBActual","todayMBProjected"))
 | mvexpand period  
 | chart avg(usage) over idx by period
 | eval averageMB=round(averageMB/1024/1024,2) 
 | eval todayMBActual=round(todayMBActual/1024/1024,2)
 | eval tempscale=86400 / (now()-relative_time(now(),"@d"))
 | eval todayMBProjected=round(tempscale*todayMBProjected/1024/1024,2)
 | fields - tempscale
 | rename idx as series
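On tempscale: now()-relative_time(now(),"@d") is the number of seconds elapsed since midnight, so 86400 divided by it is the factor that scales today's partial usage up to a projected full day. For example, at 06:00 that is 86400/21600 = 4, so 250 MB indexed so far projects to roughly 1000 MB by midnight. The mvappend/mvexpand pair above duplicates each of today's rows so the same usage can feed both the todayMBActual and todayMBProjected columns.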

isha_rastogi
Path Finder

Can you please explain the query a little more, in particular "tempscale"?


niketn
Legend

@isha_rastogi please re-post the query using the Code button (101010) so that the query does not get filtered.

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

isha_rastogi
Path Finder

Instead of internal it's _internal (with the underscore), and the series filters are series!=_* and series!=*; the special characters got filtered out of my original post.
