I build a CSV lookup file containing, per service, the hourly average response time and the hourly event count.
The file is updated daily by a scheduled search that appends to it:
| bin _time span=1h
| stats avg(responseTimeMilliseconds) as avgResponseTimeMilliseconds count(responseTimeMilliseconds) as numberOfEvents by _time Service
| table _time, Service, avgResponseTimeMilliseconds, numberOfEvents
| outputlookup hourlyaverage.csv append=true
This results in a file with the following columns: _time | Service | avgResponseTimeMilliseconds | numberOfEvents
This hourly file serves one audience. A second audience, however, requests a quarterly report of the daily average response time.
Since I already have the first file, I would like to avoid generating a second one: the daily average can be computed from the hourly averages and the hourly event counts:

daily average = sum(hourly average * hourly events) / sum(hourly events)

(the denominator, the sum of the hourly event counts, is simply the daily event count).
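To see why this formula recovers the true daily average, here is a small Python sketch (the numbers are invented for illustration) comparing the weighted roll-up of hourly summaries against a direct average over all raw events:

```python
# Sketch: the daily average recovered from hourly summaries equals the
# true average over all raw events. Data below is made up for illustration.
raw = {
    "hour1": [100, 200, 300],   # 3 events, avg 200
    "hour2": [400, 600],        # 2 events, avg 500
}

# Hourly summaries, as stored in the lookup file: (hourly avg, hourly events).
hourly = [(sum(v) / len(v), len(v)) for v in raw.values()]

# daily average = sum(hourly avg * hourly events) / sum(hourly events)
daily_events = sum(n for _, n in hourly)
daily_avg = sum(a * n for a, n in hourly) / daily_events

# Direct average over all raw events, for comparison.
all_events = [x for v in raw.values() for x in v]
assert daily_avg == sum(all_events) / len(all_events)  # both 320.0
```

Note that an unweighted mean of the hourly averages, (200 + 500) / 2 = 350, would be wrong here because the two hours contain different numbers of events.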
Here is how the daily roll-up can be expressed in Splunk. Note that a plain avg() over the hourly averages would be wrong whenever the hours hold different event counts (and the lookup has no responseTimeMilliseconds field, only avgResponseTimeMilliseconds); the hourly averages must be weighted by numberOfEvents:
| inputlookup hourlyaverage.csv
| addinfo
| where _time >= info_min_time AND _time <= info_max_time
| bin _time span=1d
| eval weightedResponseTime = avgResponseTimeMilliseconds * numberOfEvents
| stats sum(weightedResponseTime) AS weightedResponseTime sum(numberOfEvents) AS numberOfEvents by _time Service
| eval avgResponseTimeMilliseconds = weightedResponseTime / numberOfEvents
(addinfo populates info_min_time and info_max_time, so the where clause honours the search's time range.)
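As a sanity check on that roll-up, here is an equivalent Python sketch. The rows, epoch times, and service names are invented; the day binning mirrors "bin _time span=1d", and the accumulator mirrors the eval/stats pair:

```python
from collections import defaultdict

# Hypothetical rows from hourlyaverage.csv:
# (_time as epoch seconds, Service, avgResponseTimeMilliseconds, numberOfEvents)
rows = [
    (1700000000, "svcA", 200.0, 3),
    (1700003600, "svcA", 500.0, 2),
    (1700000000, "svcB", 100.0, 4),
]

DAY = 86400  # seconds; "bin _time span=1d" truncates _time to the day boundary

# (day, service) -> [weighted sum of response times, total events]
acc = defaultdict(lambda: [0.0, 0])
for t, svc, avg_ms, n in rows:
    key = (t - t % DAY, svc)
    acc[key][0] += avg_ms * n   # eval weightedResponseTime = avg * count
    acc[key][1] += n            # sum(numberOfEvents)

# Final division: daily avg = weighted sum / daily events
daily = {k: (w / n, n) for k, (w, n) in acc.items()}
for (day, svc), (avg_ms, n) in sorted(daily.items()):
    print(day, svc, avg_ms, n)
```

Both svcA hours fall on the same day, so its daily average is (200*3 + 500*2) / 5 = 320 ms over 5 events, while svcB keeps its single hour's values.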