I'm having trouble understanding the math rules on the search line. Instead of continuing to guess at what might work, only to have Splunk tell me my formatting is wrong or give me nonsensical results, I'm just going to ask for help.
I have a lot of data that is aggregated into 1-minute bins. For each 1-minute bin I have:
x_i: mean value of the variable as measured in the 1-minute bin
n_i: number of times I measured it in the 1-minute bin
sigma_i: standard deviation of the variable measured in the 1-minute bin
I'd then like to move to 1-hour bins, so I want to calculate a weighted average of my 1-minute bins. Mathematically this is very straightforward:
x_weighted_average = sum(weight_i * x_i) / sum(weight_i)
weight_i = 1 / error_i^2
and for counting statistics
error_i = sigma_i / sqrt(n_i)
Putting that together, we get:
x_weighted_average = sum(x_i * n_i / sigma_i^2) / sum(n_i / sigma_i^2)
What is the best way to do this kind of math in the search line while aggregating? Or should I just pass it to a Python script and back out?
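For reference, this is what the Python version of the math would look like: a minimal sketch of the inverse-variance weighted mean, assuming the per-minute values have already been pulled out into parallel lists (the function name and arguments are hypothetical).

```python
def weighted_average(x, n, sigma):
    """Inverse-variance weighted mean of per-minute bin means.

    x     : list of per-bin means (x_i)
    n     : list of per-bin sample counts (n_i)
    sigma : list of per-bin standard deviations (sigma_i)
    """
    # weight_i = 1 / error_i^2 with error_i = sigma_i / sqrt(n_i),
    # which simplifies to weight_i = n_i / sigma_i^2
    weights = [ni / (si * si) for ni, si in zip(n, sigma)]
    numerator = sum(w * xi for w, xi in zip(weights, x))
    return numerator / sum(weights)

# Two 1-minute bins with equal noise: the result is the n-weighted mean.
print(weighted_average([10.0, 20.0], [100, 300], [2.0, 2.0]))  # 17.5
```

Note that when all sigma_i are equal they cancel out and the result reduces to a plain count-weighted mean, which is a handy sanity check.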
I'd still do this in the search language, but it's up to you. You break it down into individual bin, eval, and stats statements. For some complex cases you'd also need streamstats and/or eventstats, or other more advanced commands, but here it's just a lot of brute-force eval and stats.
foo | bin _time span=1m
| stats avg(var1) as x_i count as n_i stdev(var1) as sigma_i by _time
| eval error_i=sigma_i/sqrt(n_i)
| eval weight_i=1/(error_i*error_i)
| eval weight_times_x=weight_i*x_i
| bin _time span=1h
| stats sum(weight_times_x) as sum_wx sum(weight_i) as sum_weight by _time
| eval x_weighted_average=sum_wx/sum_weight
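To sanity-check the search above, here is a minimal Python equivalent of the hourly roll-up (a sketch, assuming rows of (epoch_seconds, x_i, n_i, sigma_i) per 1-minute bin; the function name is hypothetical).

```python
from collections import defaultdict

def hourly_weighted_average(rows):
    """rows: iterable of (epoch_seconds, x_i, n_i, sigma_i), one per 1-minute bin.
    Returns {hour_epoch: inverse-variance weighted mean} per 1-hour bin."""
    sums = defaultdict(lambda: [0.0, 0.0])  # hour -> [sum(w*x), sum(w)]
    for t, x, n, sigma in rows:
        hour = t - (t % 3600)           # like | bin _time span=1h
        w = n / (sigma * sigma)         # weight_i = n_i / sigma_i^2
        sums[hour][0] += w * x
        sums[hour][1] += w
    return {h: wx / w for h, (wx, w) in sums.items()}

rows = [(0, 10.0, 100, 2.0), (60, 20.0, 300, 2.0), (3600, 7.0, 50, 1.0)]
print(hourly_weighted_average(rows))  # {0: 17.5, 3600: 7.0}
```

The two stats passes in the SPL correspond to the per-minute rows coming in and the per-hour dictionary coming out.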