Splunk Search

How to get stdev and avg from a multi column timechart for eventflow trends

wlcv
Observer

Hello!

I want to compare my event flow rate against a baseline (last 21 days to last 7 days, 14 days in total) versus the last 7 days, to determine whether there is any abnormal activity and how my flow is trending. My process is to take the averages of the baseline and current windows along with the standard deviation, compute the z-score, and work from that.

The problem is that the search takes ~290 seconds, and I'm hoping to see if there are any efficiencies that can be applied to make this work better.

| tstats count where index=* earliest=-21d@d latest=-7d@d by _time index span=1h summariesonly=t 
| timechart span=1h sum(count) as count by index useother=f 
| fillnull value=0 
| stats avg(*) as * 
| transpose 0 
| rename "row 1" as avg column as index 
| appendcols 
    [| tstats count where index=* earliest=-21d@d latest=-7d@d by _time index span=1h summariesonly=t 
    | timechart span=1h sum(count) as count by index useother=f 
    | fillnull value=0 
    | stats stdev(*) as * 
    | transpose 0 
    | rename "row 1" as stdev column as index ] 
| appendcols 
    [| tstats count where index=* earliest=-7d@d latest=@h by _time index span=1h summariesonly=t 
    | timechart span=1h sum(count) as count by index useother=f 
    | fillnull value=0 
    | stats avg(*) as * 
    | transpose 0 
    | rename "row 1" as current column as index ] 
| eval z = (current - avg)/stdev

Initially I used join instead of appendcols, but appendcols seems slightly faster. Part of the issue is that I can't get the stdev and avg into a single table easily (examples of my issues below).
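For what it's worth, the three appendcols legs can usually be collapsed into a single tstats pass by tagging each hourly bucket as baseline or current and aggregating by index. A sketch (untested in your environment; note it skips the fillnull step, so hours where an index logged nothing are simply absent rather than counted as 0, which will bias the baseline slightly):

```spl
| tstats count where index=* earliest=-21d@d latest=@h by _time index span=1h summariesonly=t
| eval period=if(_time < relative_time(now(), "-7d@d"), "baseline", "current")
| stats avg(eval(if(period="baseline", count, null()))) as avg
        stdev(eval(if(period="baseline", count, null()))) as stdev
        avg(eval(if(period="current", count, null()))) as current
        by index
| eval z = (current - avg)/stdev
```

This reads the accelerated data once instead of three times, which should cut the runtime substantially.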

example:

| tstats count where index=* earliest=-1d@d latest=@d by _time index span=1h summariesonly=t 
| timechart span=1h sum(count) as count by index useother=f

_time             index1  index2  index3   index4  index5  index6  index7   index8   index9  index10
2019-09-29 00:00  114929  109862  5447236  598915  101984  93383   1134374  3218677  135260  271187
2019-09-29 01:00  113735  94834   8043144  500234  101288  93374   1683179  3212936  268802  277495
2019-09-29 02:00  114069  99818   9472714  460066  99908   93260   1632044  3241514  857129  268001
...

If I use the following search, I can't seem to get the stdev and avg keyed by index. Using stats avg(*) as avg_* stdev(*) as stdev_* by index gets me no results, and the following doesn't keep the avg/stdev grouped by index either, so I had to resort to the transpose 0 lines and joining the different searches together:

| tstats count where index=* earliest=-1d@d latest=@d by _time index span=1h summariesonly=t 
| timechart span=1h sum(count) as count by index useother=f 
| fillnull value=0 
| stats avg(*) as avg_* stdev(*) as stdev_*

avg_index1 avg_index2 avg_index3 ... stdev_index1 stdev_index2 stdev_index3 ...
79202.82857857 72468.4 7022379.3 ... 20705.25571 74106.285571 63430.8 ...
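One way to get avg and stdev keyed by index from that wide timechart output, without transpose or appendcols, is to unpivot it back to long form with untable before the stats. A sketch along the lines of the example above (untested here, but untable's arguments are the x-field, the new label field, and the new value field):

```spl
| tstats count where index=* earliest=-1d@d latest=@d by _time index span=1h summariesonly=t
| timechart span=1h sum(count) as count by index useother=f
| fillnull value=0
| untable _time index count
| stats avg(count) as avg stdev(count) as stdev by index
```

This keeps the fillnull-with-0 semantics for hours where an index had no events, and yields one row per index with avg and stdev as columns.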

Please let me know what else I can try to pretty this up. Thanks in advance!


wlcv
Observer

I should also mention the ultimate goal is to have the data look like this:

        avg  current  stdev  z
index1  1    2        3      3
index2  2    3        3      4