I want to compare the previous hour's data with the present hour's data and get the percentage change using the query below.
|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
Something like this?
|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
| delta Trans as delta_Trans
| eval pct_delta_Trans = delta_Trans / Trans * 100
Hi @yuanliu, when I run the suggested query, the Trans values are fine, but I am getting negative values and an empty row for the delta_Trans and pct_delta_Trans fields. The values are not correct.
_time | application | Trans | delta_Trans | pct_delta_Trans |
2022-01-22 02:00 | app1 | 3456.000000 | | |
2022-01-22 02:00 | app2 | 5632.000000 | -1839.000000 | -5438.786543 |
2022-01-22 02:00 | app3 | 5643.000000 | 36758.000000 | 99.76435678 |
2022-01-22 02:00 | app4 | 16543.00000 | -8796.908678 | -8607.065438 |
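The skewed values above are what a group-unaware difference produces: each row is compared against the previous row in the stream, which belongs to a different application, not to the previous hour of the same application. A rough Python sketch (not SPL, and reusing the Trans values from the table above purely for illustration) of that failure mode:

```python
# Rows as emitted by mstats: one row per (hour, application), interleaved.
rows = [
    ("app1", 3456.0),
    ("app2", 5632.0),
    ("app3", 5643.0),
    ("app4", 16543.0),
]

prev = None
for app, trans in rows:
    # Group-unaware delta: subtracts the previous ROW (a different app),
    # not the previous HOUR of the same app.
    delta = None if prev is None else trans - prev
    pct = None if delta is None else delta / trans * 100
    prev = trans
    print(app, trans, delta, pct)
```

The first row has no predecessor at all (hence the empty cells for app1), and every later delta mixes two different applications.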
My mistake - I neglected groupby.
I know this has come up before (because some veterans here helped me :-)), but I can't find the old answer. (In fact, this delta-with-groupby question comes up regularly because it's a common use case.) So, here is a shot:
|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
| sort application _time
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max
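For anyone puzzling over the streamstats line: `range(Trans)` over a 2-row window gives the absolute difference between the current and previous hour, and the `if(Trans_max == Trans, ...)` restores the sign. A minimal Python sketch (just an illustration of the per-application arithmetic, not SPL) under those assumptions:

```python
def signed_deltas(trans_per_hour):
    """Per-application equivalent of:
    | streamstats window=2 range(Trans) as delta max(Trans) as Trans_max
    | eval delta = if(Trans_max == Trans, delta, "-" . delta)
    | eval pct_delta = delta / Trans * 100
    """
    out = []
    prev = None
    for cur in trans_per_hour:
        # First row: window holds one value, so range is 0.
        # Otherwise |cur - prev| with the sign restored is a plain difference.
        delta = 0.0 if prev is None else cur - prev
        # pct_delta divides by the CURRENT hour's value; null when Trans is 0.
        pct = delta / cur * 100 if cur else None
        out.append((cur, delta, pct))
        prev = cur
    return out

# current_total series from the simulation below: 215, 215, 214, 214
print(signed_deltas([215.0, 215.0, 214.0, 214.0]))
```

Note that `pct_delta` here is relative to the current hour's Trans; dividing by the previous hour's value instead would give the classic percent change from the previous hour, if that is what's wanted.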
Here is my full simulation:
| mstats max(_value) as Trans
where index=_metrics metric_name = spl.mlog.bucket_metrics.* earliest=-8h@h latest=-4h@h
by metric_name span=1h
| rename metric_name as application
``` the above simulates
|mstats sum(transaction) as Trans where index=host-metrics service=login
application IN(app1, app2, app3, app4) span=1h by application
```
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
| sort application _time
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max
My output is:
_time | application | Trans | delta | pct_delta |
2024-03-28 12:00 | spl.mlog.bucket_metrics.created | 0.000000 | 0.000000 | |
2024-03-28 13:00 | spl.mlog.bucket_metrics.created | 0.000000 | 0.000000 | |
2024-03-28 14:00 | spl.mlog.bucket_metrics.created | 0.000000 | 0.000000 | |
2024-03-28 15:00 | spl.mlog.bucket_metrics.created | 0.000000 | 0.000000 | |
2024-03-28 12:00 | spl.mlog.bucket_metrics.created_replicas | 0.000000 | 0.000000 | |
2024-03-28 13:00 | spl.mlog.bucket_metrics.created_replicas | 0.000000 | 0.000000 | |
2024-03-28 14:00 | spl.mlog.bucket_metrics.created_replicas | 0.000000 | 0.000000 | |
2024-03-28 15:00 | spl.mlog.bucket_metrics.created_replicas | 0.000000 | 0.000000 | |
2024-03-28 12:00 | spl.mlog.bucket_metrics.current_hot | 12.000000 | 0.000000 | 0.000000 |
2024-03-28 13:00 | spl.mlog.bucket_metrics.current_hot | 12.000000 | 0.000000 | 0.000000 |
2024-03-28 14:00 | spl.mlog.bucket_metrics.current_hot | 12.000000 | 0.000000 | 0.000000 |
2024-03-28 15:00 | spl.mlog.bucket_metrics.current_hot | 12.000000 | 0.000000 | 0.000000 |
2024-03-28 12:00 | spl.mlog.bucket_metrics.current_hot_replicas | 0.000000 | 0.000000 | |
2024-03-28 13:00 | spl.mlog.bucket_metrics.current_hot_replicas | 0.000000 | 0.000000 | |
2024-03-28 14:00 | spl.mlog.bucket_metrics.current_hot_replicas | 0.000000 | 0.000000 | |
2024-03-28 15:00 | spl.mlog.bucket_metrics.current_hot_replicas | 0.000000 | 0.000000 | |
2024-03-28 12:00 | spl.mlog.bucket_metrics.current_total | 215.000000 | 0.000000 | 0.000000 |
2024-03-28 13:00 | spl.mlog.bucket_metrics.current_total | 215.000000 | 0.000000 | 0.000000 |
2024-03-28 14:00 | spl.mlog.bucket_metrics.current_total | 214.000000 | -1.000000 | -0.4672897 |
2024-03-28 15:00 | spl.mlog.bucket_metrics.current_total | 214.000000 | 0.000000 | 0.000000 |
2024-03-28 12:00 | spl.mlog.bucket_metrics.frozen | 0.000000 | 0.000000 | |
2024-03-28 13:00 | spl.mlog.bucket_metrics.frozen | 0.000000 | 0.000000 | |
2024-03-28 14:00 | spl.mlog.bucket_metrics.frozen | 1.000000 | 1.000000 | 100.0000 |
2024-03-28 15:00 | spl.mlog.bucket_metrics.frozen | 0.000000 | -1.000000 | |
2024-03-28 12:00 | spl.mlog.bucket_metrics.rolled | 0.000000 | 0.000000 | |
2024-03-28 13:00 | spl.mlog.bucket_metrics.rolled | 0.000000 | 0.000000 | |
2024-03-28 14:00 | spl.mlog.bucket_metrics.rolled | 0.000000 | 0.000000 | |
2024-03-28 15:00 | spl.mlog.bucket_metrics.rolled | 0.000000 | 0.000000 | |
2024-03-28 12:00 | spl.mlog.bucket_metrics.total_removed | 0.000000 | 0.000000 | |
2024-03-28 13:00 | spl.mlog.bucket_metrics.total_removed | 0.000000 | 0.000000 | |
2024-03-28 14:00 | spl.mlog.bucket_metrics.total_removed | 1.000000 | 1.000000 | 100.0000 |
2024-03-28 15:00 | spl.mlog.bucket_metrics.total_removed | 0.000000 | -1.000000 | |
Obviously my results have lots of nulls because lots of my "Trans" values are zero. But you get the idea.
Hi @yuanliu , thank you so much, it worked
Hi @yuanliu, as suggested, I tried the query below, but I am not getting the expected output. I mean, I am not getting the previous hour's data in the delta column; all values are 0. Please see my output.
|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
|streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
|sort application _time
|eval delta = if(Trans_max == Trans, delta, "-" . delta)
|eval pct_delta = delta / Trans * 100
|fields - Trans_max
Output:
_time | application | Trans | delta_Trans | pct_delta_Trans |
2022-01-22 02:00 | app1 | 3456.000000 | 0.000000 | 0.000000 |
2022-01-22 02:00 | app2 | 5632.000000 | 0.000000 | 0.000000 |
2022-01-22 02:00 | app3 | 5643.000000 | 0.000000 | 0.000000 |
2022-01-22 02:00 | app4 | 16543.00000 | 0.000000 | 0.000000 |