
Splunk Answers : Using Splunk : Splunk Search : How can I run a search that will use data from buc...

viggor (Path Finder), 09-28-2017 02:59 PM

Given a time interval provided by the user, I would like to output those buckets that contain more elements than the average of the 50 non-empty buckets before them.

Is there an easy way of doing this?

1 Solution


DalJeanis (SplunkTrust), 09-28-2017 03:50 PM

We can interpret your question a number of ways. Here's one easy way that illustrates what you want to know.

```
index=foo earliest=-60m@m latest=@m
| bin _time span=10s
| rename COMMENT as "The above gives you 360 time buckets at 10s each"
| rename COMMENT as "roll up each bin, and calculate the average for up to 50 bins before it"
| stats count as mycount by _time
| streamstats current=f count as recno avg(mycount) as avgcount window=50
| rename COMMENT as "throw away the first 50 bins and any bins less than average"
| where (recno>=50) AND (mycount>avgcount)
```

Also, please remember that, by definition, about half of all bins are going to contain more events than average. If you get no results from the above, then change to `window=51`. (That would mean I was wrong about whether `current=f window=50` means the last 50 or the last 49 events.)

If you are looking for bins that are *significantly* above average, then you might want to consider using "average plus some number of standard deviations".

```
| streamstats current=f count as recno avg(mycount) as avgcount stdev(mycount) as stdevcount window=50
| rename COMMENT as "throw away the first 50 bins and any bins less than average plus 2 stdevs"
| where (recno>=50) AND (mycount>avgcount+2*stdevcount)
```
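For readers who want to check the logic outside Splunk, here is a minimal Python sketch (not SPL; the function name and toy data are made up for illustration) of what the two searches above do: compare each bucket's count against the mean of the previous 50 buckets, optionally raised by some number of standard deviations.

```python
from statistics import mean, stdev

def flag_busy_buckets(counts, window=50, n_stdev=0):
    """For each bucket, compare its count against the mean of the previous
    `window` counts (optionally plus n_stdev standard deviations),
    mirroring streamstats current=f window=50 in the SPL above."""
    flagged = []
    for i, c in enumerate(counts):
        prev = counts[max(0, i - window):i]   # current=f: exclude the current row
        if len(prev) < window:                # mirrors: where recno>=50
            continue
        threshold = mean(prev) + (n_stdev * stdev(prev) if n_stdev else 0)
        if c > threshold:
            flagged.append(i)
    return flagged

# toy data: mostly quiet buckets with two spikes
counts = [5] * 60 + [50] + [5] * 10 + [60]
print(flag_busy_buckets(counts))              # plain "above average" variant
print(flag_busy_buckets(counts, n_stdev=2))   # "avg + 2 stdev" variant
```

In both variants the two spike positions are flagged; on real data the second variant is much stricter, as the answer suggests.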



viggor (Path Finder), 09-28-2017 05:57 PM

I have two questions:

1) In your solution, I don't see a condition on the 50 buckets that requires them to be non-empty.

2) Related to this, does `earliest=-60m@m` limit the time range from which the 50 non-empty buckets are chosen to the last hour? (It could be that there are fewer than 50 such buckets in the last hour.)


DalJeanis (SplunkTrust), 09-28-2017 09:08 PM

@viggor - (1) The `stats` command will not create records unless there is at least one event in the _time range. If you were using more complicated criteria, you could drop records immediately after the `stats`.
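That behavior of `stats` can be seen with a quick Python sketch (not SPL; the timestamps are made up): grouping events by their 10-second bucket simply produces no entry for a bucket with zero events, so empty buckets never reach `streamstats` and no explicit non-empty condition is needed.

```python
from collections import Counter

# made-up epoch timestamps; note nothing falls in the 110-119s bucket
events = [101, 103, 104, 125, 126, 138]

# mirror "bin _time span=10s | stats count as mycount by _time"
buckets = Counter((t // 10) * 10 for t in events)

# the empty 110s bucket is simply absent from the result
print(sorted(buckets.items()))
```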

(2) As part of interpreting the question, we had to make some arbitrary assumptions. One of them was that 360 10-second buckets would be enough to get at least 50 non-empty buckets.

Mostly we were trying to give you a working base search that you could update using whatever assumptions are closer to the characteristics of your actual data.

Happy Splunking!


viggor (Path Finder), 09-29-2017 07:15 AM

Regarding the `where` statement, i.e., `| where (recno>=50)`: is this to exclude buckets that have fewer than 50 non-empty buckets preceding them? Put differently, if I knew that every bucket always has at least 50 preceding buckets, could I drop that condition?


viggor (Path Finder), 09-29-2017 07:39 AM

Also on line 7:

`| streamstats current=f count as recno avg(mycount) as avgCount stdev(mycount) as stDevCount window=120`

shouldn't the average be taken over `count` instead of over `mycount`?


DalJeanis (SplunkTrust), 09-29-2017 11:00 AM

@viggor - Best practice is to always rename the field `count`, to avoid confusing yourself or Splunk. That way, `count` in a command always refers to the count being calculated by that command, and there is never any ambiguity about what you are averaging. In this case, `mycount` is the `count` created in line 6 for a particular _time bucket.

If you change the `streamstats` to `window=120`, then you are averaging the last 120 non-zero buckets. Is that what you want?

The first 50 non-zero buckets can never have 50 non-zero buckets before them to average out. You do not HAVE to drop them... normally I would just start the job with `earliest=` far enough back that I could throw away the first 50 records and still have the range that I was actually looking for.

For instance, if I typically had about 75 nonzero buckets per hour, and if I wanted a 2 hour window, then I would start the window a little more than 40 minutes earlier than my window. (Probably 60 just to be sure.)

As long as you are doing that, you don't really need to check for the 50. If the average you are checking against happens to cover only 45 buckets, the difference is not that critical. Just average the events starting at a duration before the desired range that you estimate should usually contain at least 50 non-zero buckets, and then, after calculating the averages, throw away all buckets before the desired range.
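That warm-up approach can be sketched in Python as well (not SPL; the function name, timestamps, and counts are made up): extend the range backward, compute the rolling averages over everything, then keep only the buckets inside the range you actually wanted.

```python
from statistics import mean

def busy_in_range(buckets, range_start, window=50):
    """buckets: sorted list of (time, count) for non-empty buckets,
    starting earlier than range_start so the leading rows serve purely
    as warm-up for the rolling average."""
    out = []
    for i, (t, c) in enumerate(buckets):
        prev = [cnt for _, cnt in buckets[max(0, i - window):i]]
        # after computing the average, keep only buckets in the desired range;
        # no explicit recno>=50 check, per the discussion above
        if t >= range_start and prev and c > mean(prev):
            out.append(t)
    return out

# 60 warm-up buckets before t=1000, then the range of interest
buckets = [(900 + i, 5) for i in range(60)] + [(1000, 30), (1001, 4)]
print(busy_in_range(buckets, range_start=1000))
```

Only the spike at t=1000 is reported; the warm-up buckets feed the average but are excluded from the output.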
