dang
Path Finder
03-10-2011
09:22 PM
I am attempting to calculate a running average with autoregress for a count of errors across a group of servers. I'm using the following query to get the data in 5-minute slices:
index="monitoring" ServerErrors | timechart span=5m sum(ServerErrors)
How would I get a running average of the last four hours of the values generated here? Do I want to use something like
| autoregress p=1-48
My experience here is very limited, so I'm certain there is much I don't know about what's going on here.
1 Solution
David

Splunk Employee
03-10-2011
10:05 PM
I'd go this route:
index="monitoring" ServerErrors
| timechart span=5m sum(ServerErrors) as Error5MinSum
| streamstats avg(Error5MinSum) window=48
http://www.splunk.com/base/Documentation/latest/SearchReference/Streamstats
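For intuition: `window=48` covers the last four hours because 48 slices × 5 minutes = 240 minutes. A minimal Python sketch of the same sliding-window average (the function name and early-window behavior, averaging over however many values exist so far, are my assumptions about what `streamstats` does here):

```python
from collections import deque

def running_avg(values, window=48):
    """Running average over at most `window` trailing values.

    Mirrors a streamstats-style windowed avg: until the window fills,
    each point averages over the values seen so far.
    """
    buf = deque(maxlen=window)  # automatically drops the oldest value
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

# With window=2: [4] -> 4.0, [4,8] -> 6.0, [8,6] -> 7.0, [6,2] -> 4.0
print(running_avg([4, 8, 6, 2], window=2))
```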
dang
Path Finder
03-10-2011
11:49 PM
Thanks. This provided the kind of information I wanted.
