Topic: How would I compare the average daily results from the previous time period to today? (Alerting)
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156299#M2561
How would I compare the average daily results from the previous time period to today?
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156298#M2560
<P>I'm trying to create some monitoring alerts for when errors increase greater than a certain amount compared to their usual amount. I've got it working to compare yesterday to today, but I'd like to compare the daily average of a certain period to today for more accurate results. This is proving to be a little too tricky for me and any help would be greatly appreciated! Here is my current search:</P>
<PRE>index="reseller" sourcetype="oneclick_error_log" Sitename="*"
| bucket _time span="d"
| stats count AS oneclick_errors by Sitename, _time
| delta oneclick_errors as change
| eval change_percent=change/(oneclick_errors-change)*100
| sort Sitename
| where _time>=relative_time(now(),"-d") AND change_percent > 25</PRE>
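The delta/percent arithmetic in the search above reduces to ordinary day-over-day percent change. A minimal Python sketch (the function name is ours, not SPL):

```python
def change_percent(today, yesterday):
    """Mirror of the SPL evals: delta yields change = today - yesterday,
    then change_percent = change / (today - change) * 100.
    Since today - change == yesterday, this is the classic
    day-over-day percent change."""
    change = today - yesterday
    return change / (today - change) * 100

# e.g. 50 errors today vs 40 yesterday is a 25% increase
```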
<P>Note: I have the alert set to run at midnight so there is a complete dataset for comparison.</P>
Posted by daviduslan on Wed, 07 May 2014 22:11:18 GMT
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156299#M2561
<P>One way, but perhaps not the best, is by using a subsearch and <CODE>eval</CODE>. Here's an example:</P>
<PRE><CODE>earliest=@d sourcetype=access_combined
| eval
[ search earliest=-1d@d latest=@d sourcetype=access_combined
| stats avg(bytes) as avg_bytes
| return avg_bytes
]
| table _time, avg_bytes, bytes
</CODE></PRE>
<P>In this example, the <CODE>eval</CODE> command looks a little strange. But remember, subsearches are a textual construct: when the subsearch finishes, the search command inside <CODE>[</CODE> and <CODE>]</CODE> is textually replaced by the results of the subsearch - in this case <CODE>avg_bytes=&lt;some_number&gt;</CODE>. This happens before <CODE>eval</CODE> even "sees" it - all <CODE>eval</CODE> "sees" is <CODE>| eval avg_bytes=1234567</CODE>.</P>
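As a hypothetical illustration of that textual substitution (plain Python, not SPL; the `[SUBSEARCH]` marker and function name are stand-ins, not real Splunk syntax), the inner result is rendered as field=value text and spliced into the outer search string before the outer search is parsed:

```python
# Sketch of subsearch expansion: the inner search runs first, and its
# result is substituted as *text* into the outer search string before
# the outer search is parsed.
def expand_subsearch(outer_template, subsearch_result):
    # `return avg_bytes` renders the result as field=value text
    rendered = " ".join(f"{k}={v}" for k, v in subsearch_result.items())
    return outer_template.replace("[SUBSEARCH]", rendered)

outer = ("earliest=@d sourcetype=access_combined "
         "| eval [SUBSEARCH] "
         "| table _time, avg_bytes, bytes")
expanded = expand_subsearch(outer, {"avg_bytes": 1234567})
# expanded now reads "... | eval avg_bytes=1234567 | table ..."
```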
<P>This is probably not the best-performing way to solve this problem; it could be improved by a summary index or by accelerating the subsearch.</P>
Posted by dwaddle on Thu, 08 May 2014 01:22:32 GMT
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156300#M2562
<P>Turns out I'm working with way too much data for subsearches. I came up with a different solution that I'll post in a separate comment.</P>
Posted by daviduslan on Fri, 09 May 2014 00:01:35 GMT
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156301#M2563
<P>This is the search I ended up using to solve my problem (there was too much data to use a subsearch/eval). The change_percent still needs some work, but I've got my data how I want it.</P>
<PRE>index="reseller" sourcetype="oneclick_error_log" Sitename="*" earliest=-8d
| stats count as weekly_total_errors, count(eval(if(_time>relative_time(now(),"-d"),"x",null()))) as todays_errors by Sitename
| eval weekly_total_errors = weekly_total_errors - todays_errors
| eval weekly_avg = weekly_total_errors/7
| eval change = todays_errors-weekly_avg
| eval change_percent = (change/weekly_avg)*100</PRE>
Posted by daviduslan on Fri, 09 May 2014 00:03:38 GMT
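The eval chain in the post above amounts to the following arithmetic, shown here as a plain-Python sketch (the function name is ours, not SPL):

```python
def weekly_change_percent(eight_day_total, todays_errors):
    """Mirror of the eval chain: drop today's errors from the 8-day
    total, average the remaining 7 full days, then express today's
    count as a percent change from that average."""
    weekly_total = eight_day_total - todays_errors
    weekly_avg = weekly_total / 7
    change = todays_errors - weekly_avg
    return change / weekly_avg * 100

# e.g. 840 errors over 8 days, 140 of them today:
# weekly_avg = 100, so today is a 40% increase
```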
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156302#M2564
<P>There is also an app that allows you to do this easily.</P>
<P><A href="http://apps.splunk.com/app/1645">http://apps.splunk.com/app/1645</A></P>
Posted by Lucas_K on Fri, 09 May 2014 01:53:53 GMT
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156303#M2565
<P>Another option:</P>
<PRE><CODE>| multisearch
    [ search index="reseller" sourcetype="oneclick_error_log" Sitename="*" earliest=-8d@d latest=@d | eval type="weeklyAvg" ]
    [ search index="reseller" sourcetype="oneclick_error_log" Sitename="*" earliest=@d | eval type="today" ]
| bucket span=1d _time
| chart count over Sitename by type
| eval weeklyAvg=round(weeklyAvg/7,2)
| eval change_percent=round((today-weeklyAvg)*100/weeklyAvg,2)
| where change_percent > 25
</CODE></PRE>
Posted by somesoni2 on Fri, 09 May 2014 14:40:25 GMT
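The multisearch approach above tags each event with a type and pivots counts per site before computing the percent change. The same bookkeeping can be sketched in plain Python (hypothetical event tuples and function name, not SPL):

```python
from collections import defaultdict

def alert_sites(events, threshold=25.0):
    """events: iterable of (sitename, type) pairs, where type is
    "weeklyAvg" for the prior 7 full days and "today" for the current
    day, mimicking the two multisearch branches."""
    counts = defaultdict(lambda: {"weeklyAvg": 0, "today": 0})
    for site, typ in events:
        counts[site][typ] += 1  # chart count over Sitename by type
    alerts = {}
    for site, c in counts.items():
        weekly_avg = round(c["weeklyAvg"] / 7, 2)
        if weekly_avg == 0:
            continue  # avoid dividing by zero for brand-new sites
        change_percent = round((c["today"] - weekly_avg) * 100 / weekly_avg, 2)
        if change_percent > threshold:
            alerts[site] = change_percent
    return alerts
```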
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156304#M2566
<P>Unfortunately, the limit on how much data subsearches can process makes this a solution I can't use. Thanks so much for the response though!</P>
Posted by daviduslan on Fri, 09 May 2014 16:22:52 GMT
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156305#M2567
<P>I've played with timewrap - it's super awesome and powerful, but I couldn't figure out how to configure it to compare larger windows of time to smaller windows (last week vs. today). If I wanted to compare today's results to the same day last week, it would be perfect.</P>
Posted by daviduslan on Fri, 09 May 2014 16:39:26 GMT
https://community.splunk.com/t5/Alerting/How-would-I-compare-the-average-daily-results-from-the-previous/m-p/156306#M2568
<P>How do you refine the WHERE clause so that it not only looks for "change_percent > 25" but also, for example, "weeklyAvg > 100"? I've tried "where change_percent > 25 AND weeklyAvg > 100" in my search. During the first parsing phase I see the results of the query (before the WHERE statement) populated in the table from the stats command, but as soon as it gets to the WHERE statement, the long list of entries is reduced to just a few (where a lot more is clearly expected).</P>
Posted by fshimaya on Tue, 29 Sep 2020 16:12:00 GMT