Dashboards & Visualizations

How to set up alerts when the current week's data is more than the avg of the last 4 weeks' data?

rahul_n
Explorer

Hi All. 

I want to check whether there is any way I can set up alerts if the current week's data is more than the avg of the last 4 weeks' data.

 

I have around 25 customers hitting 3 APIs. I want to check whether the first customer has hit the first API more in the current week than the avg number of hits in the previous 4 weeks, and then send an alert if it exceeds the avg.
Similarly for all the customers and all 3 APIs.

All of the above should happen with a single Splunk query; I don't want to write 25*3=75 queries for the alerts.

I have written a query for 1 customer and 1 API:

index=nameofindex ns=namespace process="end" method!=GET earliest=-30d@d   customer1
| search API="API1"
| timechart span=1w count
| timewrap w series=short
| eval mean=(s1+s2+s3+s4)/4
| where s0 > mean


Can anyone please help here? Any help is greatly appreciated. Thanks!


ITWhisperer
SplunkTrust
| gentimes start=-35 increment=1d
| rename starttime as _time 
| fields _time
| eval customer=split("customer1,customer2,customer3",",")
| mvexpand customer
| eval api=split("API1,API2,API3",",")
| mvexpand api
| eval count=random() % 20


| bin _time span=1w
| stats sum(count) as count by _time customer api
| eval previous=if(_time > relative_time(now(),"-1w@w"),null,count)
| eventstats avg(previous) as average by customer api
| where _time > relative_time(now(),"-1w@w")
| fields - previous

rahul_n
Explorer

Hi @ITWhisperer ,

Thank you for the quick response. 

I have a few questions about the query.

1. I need to pass an index, a namespace and a few other parameters (like index=indexname ns=namespace process="end" method!=GET pod_name=podname region=regionname) before the actual query.

With the gentimes command, it isn't letting me pass that basic query as mentioned above.

2. Also, if I use the split() command with the customer names or the API names in it, it is not working. (For reference: we have the fields Customer and API in the logs.)

3. The count must be the number of times a customer has accessed an API.

For reference, I use "<basic splunk query> | timechart count by API" to get the timechart showing the count. (We have the field API in the logs, which makes it easy.)

Would it be possible for you to modify the query based on the above conditions?

I am sorry for asking so much, but I have tried multiple ways and was left with no results.

Thank you very much in advance


ITWhisperer
SplunkTrust

The example was a run-anywhere example to show the sort of thing to try. In your case, you just need the bit after the blank lines.

<basic splunk query>
| bin _time span=1w
| stats count by _time Customer API
| eval previous=if(_time > relative_time(now(),"-1w@w"),null,count)
| eventstats avg(previous) as average by Customer API
| where _time > relative_time(now(),"-1w@w")
| fields - previous

You should probably use an earliest time of -35d@d as well to get complete weeks.
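For example, plugging your base search in front (the index, namespace, pod and region values below are just the placeholders from your question), the whole thing might look something like this:

index=indexname ns=namespace process="end" method!=GET pod_name=podname region=regionname earliest=-35d@d
| bin _time span=1w
| stats count by _time Customer API
| eval previous=if(_time > relative_time(now(),"-1w@w"),null,count)
| eventstats avg(previous) as average by Customer API
| where _time > relative_time(now(),"-1w@w")
| fields - previous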

rahul_n
Explorer

Hi @ITWhisperer ,

By using the given query, which is

index=indexname ns=namespace earliest=-35d@d
| search API= API1 AND Customer= Customer1
| bin _time span=1w
| stats count by _time Customer API
| eval previous = if(_time > relative_time(now(), "-1w@w"), null, count)
| eventstats avg(previous) as average by Customer API
| where _time > relative_time(now(), "-1w@w")
| fields - previous

The table which I got for one customer and one API is 

_time        Customer    API    count    average
2021-04-17   Customer1   API1   839366   348.25
2021-04-24   Customer1   API1   8666     374

 

Here, does the "count" refer to the count of requests for the week starting with the date in first column ? or the end date ? 

Does the _time refer to the start day of the week or the end day of the week ? 

Can we change the query so that we get the results which have the current week's count is greater than the average of last 4 week's count ? (So that I can create an alert which sends the list of customers sending more requests in current week )

 

Thank you for the help, and thanks in advance for any further help.


ITWhisperer
SplunkTrust

You need to snap latest to the start of the day so that today's events aren't included, and the comparisons should have been >= rather than just >.

index=indexname ns=namespace earliest=-35d@d latest=@d
| search API="API1" AND Customer="Customer1"
| bin _time span=1w
| stats count by _time Customer API
| eval previous = if(_time >= relative_time(now(), "-1w@w"), null, count)
| eventstats avg(previous) as average by Customer API
| where _time >= relative_time(now(), "-1w@w")
| fields - previous
| where count > average

The count is for the 7 days starting from the date in the _time column.
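If you want the alert output to show that week range explicitly, one optional tweak (just a sketch, added on top of the query above) is to format the week boundaries at the end of the search:

| eval week_start=strftime(_time, "%Y-%m-%d"), week_end=strftime(_time + 6*86400, "%Y-%m-%d")

Here week_end is simply the bucket start plus six days, matching the 7-day span described above.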

rahul_n
Explorer

@ITWhisperer, thank you!
