Good use case! But, to make sure I understand: you have a number of eventtypes (and applications) you want to baseline, and you want to compute the difference between the current rate and the baseline rate to detect when the system is no longer behaving normally.
For example, if eventtype="foo" has a count of 250 in the last 4 minutes, how does that compare to the average count for eventtype="foo" over the same 4-minute window on the previous 4 Mondays?
A challenge with these comparisons is the number of false positives and false negatives they can produce, because a simple average with a % deviation threshold is often not sufficient to model the data accurately. Apologies for the statistical terminology, but an ideal baseline should be fit to a probability distribution function that accurately models the data, and different kinds of data may require different probability distribution functions. If you are trying to do this across multiple data types/dimensions, it can be difficult to implement, as you'll need to store and maintain these multiple baselines across a large number of dimensions.
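To make the false-positive/false-negative point concrete, here is a minimal stdlib-only Python sketch (not our app, and just one possible model): it assumes event counts are roughly Poisson-distributed and compares a naive % deviation rule against a tail-probability test. All function names, thresholds, and counts are illustrative.

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via a direct sum (stdlib only)."""
    cdf = sum(math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
              for i in range(k))
    return 1.0 - cdf

def pct_deviation_alert(count, baseline_mean, threshold_pct=25.0):
    """Naive rule: alert when the count deviates more than threshold_pct."""
    return abs(count - baseline_mean) / baseline_mean * 100.0 > threshold_pct

def poisson_alert(count, baseline_mean, p_value=0.001):
    """Distribution-based rule: alert when a count this high is improbable."""
    return poisson_sf(count, baseline_mean) < p_value

# Low-volume eventtype: 6 events vs. a baseline mean of 4.
# The % rule fires (50% deviation), but P(X >= 6) is about 0.21 --
# ordinary Poisson noise, so the % rule gives a false positive.
print(pct_deviation_alert(6, 4.0))      # True
print(poisson_alert(6, 4.0))            # False

# High-volume eventtype: 10,500 events vs. a baseline mean of 10,000.
# Only a 5% deviation, so the % rule stays quiet, yet the count is
# roughly 5 standard deviations out -- a real anomaly the % rule misses.
print(pct_deviation_alert(10500, 10000.0))  # False
print(poisson_alert(10500, 10000.0))        # True
```

The same current-vs-baseline comparison produces opposite answers depending on the model, which is exactly why a one-size-fits-all average and % threshold struggles across many eventtypes of very different volumes.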
My company has developed an analytics app that accomplishes exactly this and makes it simple to use.
Here's a link to our app: http://splunk-base.splunk.com/apps/68765/prelert-anomaly-detective