Splunk Search

Search for a snapshot count total for the last 20 mins of every hour

FC50
Path Finder

I'm looking to do a search that captures a snapshot of how many devices from certain subnets have gone through our firewall in the last 20 minutes, but for each preceding hour in the day. The basic search for this is pretty straightforward:

index="firewall_std" src="10.19*.*.*" | dedup src 

with the time picker set to the last 20 mins.

But for example, if I run this search at 2pm with the time picker set to Today, I want (if possible) to bring back what the count total was for the last 20 mins at 1pm, 12pm, 11am, 10am, 9am, etc.

I've achieved it by appending another search for each hour and -20m (see below), but was wondering if there is an easier, more streamlined way to do this?

| append [search index="firewall_std" src="10.19*.*.*"  earliest=-1h@m-20m latest=-1h@m | dedup src | stats min(_time) as _time count as Count | eval hour="02" | fields hour, _time, Count ]

1 Solution

FC50
Path Finder

FYI:

Update: solved this with the following search in the end:

index="cp_collect" src="10.197.*.*" | timechart dc(src) span=16m


FC50
Path Finder

Thanks for your responses; they have helped expand my knowledge of Splunk.

The last 15-20 min snapshot count every hour is important because it will allow us to get a rolling total of the number of machines that were active on a single subnet at a particular time.

One of the challenges with this is that we need to dedup the src IP address for every snapshot count so that we get an accurate figure for individual machines. When the time picker is set to Today, for example, this dedup gets accurate results for the last 20 mins, but for earlier windows the count total declines, because dedup keeps only one event per src across the whole time range rather than one per window.
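
A minimal sketch of per-window dedup, reusing the index and subnet filter from the original question: replacing dedup with dc() inside timechart gives each time bucket its own distinct count, so earlier windows are not undercounted.

```count distinct sources per 20-minute bucket instead of deduping across the whole range```
index="firewall_std" src="10.19*.*.*"
| timechart span=20m dc(src) AS Count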


FC50
Path Finder

Thanks for the responses. It might be down to my beginner-level Splunk skills, but I'm not seeing any results returned with that search idea. I'm also not sure whether the timewrap command would be flexible enough to capture the 20-minute window of data every hour? I had a play around with it and had no joy.

It might be useful to show the search that is working for me at the moment:

index="firewall_std"" src="10.19*.*.*" earliest=-2h@m-20m latest=-2h@m | dedup src | stats min(_time) as _time count as Count | eval Hour="-02" | fields Hour, _time, Count
| append [search index="firewall_std" src="10.19*.*.*" earliest=-1h@m-20m latest=-1h@m | dedup src | stats min(_time) as _time count as Count | eval Hour="-01" | fields Hour, _time, Count ]
| append [search index="firewall_std" src="10.19*.*.*" earliest=-20m@m latest=@m latest=-20m | dedup src | stats min(_time) as _time count as Count | eval Hour="00" | fields Hour, _time, Count ]

Unfortunately this isn't scalable or flexible, as I have to append a new search for each hour of data gathered. Is it possible to amend this so that I can run a new search at any time, without the clunkiness of having to append new searches within the main search?
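
For comparison, a minimal single-search sketch of the same idea, assuming clock-aligned windows (the last 20 minutes of each clock hour) are an acceptable substitute for the -Nh@m-20m windows above:

```keep only events from the last 20 minutes of each clock hour```
index="firewall_std" src="10.19*.*.*"
| where tonumber(strftime(_time,"%M"))>=40
```one row per hour, with a per-window distinct count of src```
| bin _time span=1h
| stats dc(src) as Count by _time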


mattymo
Splunk Employee

Timechart, timewrap, and streamstats all provide some cool time-travel tricks that will allow us to bin and control time with less SPL effort than using stats. I think your answer lies within one or more of them, depending on the "why" of your use case. Also, it sounds like this should ultimately be a dashboard powered by macros, or a report rather than an ad-hoc search, so be sure to check those concepts out.

If ultimate control is needed, we can do this manually with stats and "| bin _time". As the other example provided shows, Splunk lets you bend time like Doc and Marty McFly!

I always start with timewrap, because I feel most folks end up trying to implement something similar and I can make you dangerous faster :). Plus I'm visual, and it helped me conceptualize how Splunk can use time.

Here's an example. 

Counting over time is pretty easy using timechart vs stats. What's the diff? Timechart implements time for us in the command. Because you care about 20m, I will use the span flag to bin 20m buckets with a time picker of "Today". Also, using dc() does the dedup for us. Another way to dedup would be to use a split-by in your stats instead of the dedup command.

```Search for access events from a specific subnet. Protip: Splunk understands CIDR```
index=k8s pod="istio-ingressgateway-757f95b7d9-whsz7" forwarded_for="66.249.0.0/16"
```Use timechart to draw a timeseries, using span to control bins of time, and partial to instruct Splunk to only show complete buckets of time.```
| timechart span=20m partial=f dc(forwarded_for) AS Count

[Screenshot: timechart of 20-minute distinct-count buckets over Today]
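
As a sketch of the split-by alternative mentioned above (same assumed k8s data), binning time and splitting stats by the source field collapses duplicates without the dedup command:

```bin into 20m buckets and split by source, so each source appears once per bucket...```
index=k8s pod="istio-ingressgateway-*" forwarded_for="66.249.0.0/16"
| bin _time span=20m
| stats count by _time forwarded_for
```...then count the remaining unique sources per bucket```
| stats count as Count by _time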

Without knowing more about why the 20-minute lookback at the top of the hour is important to you, let's just graph them all, and we can filter down to time slices later in the pipe.

```Search for access events from a specific subnet. Protip: Splunk understands CIDR```
index=k8s pod="istio-ingressgateway-*" forwarded_for="66.249.0.0/16"
```Use timechart to draw a timeseries, using span to control bins of time, and partial to instruct Splunk to only show complete buckets of time.```
| timechart span=20m partial=f dc(forwarded_for) AS Count
```timewrap 1h to lay the time bins over each other automagically```
| timewrap 1h

[Screenshot: timewrapped timechart, one series per hour of Today]

As you can see, timewrap got me from a timechart to a wrapped timechart for each hour of "Today" with very little effort.

From here we should be able to select the time bucket that provides the number we want... in other words, the rows in the results that show the top of each hour.

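A minimal sketch of that selection, appended to the wrapped search above and assuming 20m spans, so the bucket covering the last 20 minutes of each hour is the one starting at minute 40:

```after timewrap, keep only the bucket that starts at minute 40 of each hour```
| where strftime(_time,"%M")="40"
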
Now... here's where it can go from 0 to 100 real quick... I can then take these time series and iterate over them. I am not saying you need this here, but just to show you how powerful this can be, especially if your next Answers post is going to be "how do I automate the analysis of these values over time" 😉

```use short series names in this example```
| timewrap 1h series=short
```rename the first field to "now"```
| rename Count_s0 AS now
```for each remaining series field, eval a new field with its delta from now```
| foreach Count_s*
    [ eval d<<MATCHSTR>> = now - <<FIELD>> ]
```then use the superpower called streamstats to calculate analytics on each series. I will use a window of 24 because I believe that will be my max, and Splunk will probably just do the right thing... lol```
| streamstats window=24 median(d*) as median_*
```review your fields```
| table _time d* Count_*

In this example I look back 24 values and calculate a median for each series.

PAUSE!

That's a lot... now, before we go further: I am down a rabbit hole in my own data (default index-time fields on HTTP Event Collector data), but I digress. I am going to try the other solution provided as well, or maybe jam some of that into my answer, but let me know if this is getting close to your ultimate goal.

I'll add a streamstats version later... which simply implements a more scalable split-by option; see my GitHub example.
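
In the meantime, a minimal streamstats sketch along those lines, reusing the firewall data from the original question; the 24-bucket window and the rolling median are illustrative assumptions:

```distinct count of sources per 20-minute bucket```
index="firewall_std" src="10.19*.*.*"
| bin _time span=20m
| stats dc(src) as Count by _time
```rolling median over the previous 24 buckets, excluding the current one```
| streamstats window=24 current=f median(Count) as median_count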

- MattyMo

tread_splunk
Splunk Employee

I've seen elsewhere this technique of adapting _time to suit your purpose and it seems to fit here...


index="firewall_std"" src="10.19*.*.*"
| addinfo 
| eval BeyondTheHour=info_search_time%3600, _time=_time-(floor(info_search_time%3600+1)) 
| timechart count, max(BeyondTheHour) as BeyondTheHour span=20m 
| eval date_minute=strftime(_time,"%M") 
| where date_minute=40 
| eventstats max(BeyondTheHour) as BeyondTheHour 
| eval startTime=strftime(_time+BeyondTheHour,"%d/%m/%Y %H:%M:%S.%3Q"),endTime=strftime(_time+BeyondTheHour+1200,"%d/%m/%Y %H:%M:%S.%3Q") 
| table startTime endTime count


The query calculates how many seconds past the hour you ran the search (eval BeyondTheHour=info_search_time%3600) and winds _time back on all the events by that amount, so your last 20 minutes is now represented as the last 20 minutes of the previous hour (it took me some time to get my head around this, so you might need some patience here). For example, if you run the search at 14:07, BeyondTheHour is 420 seconds, so an event from 13:55 is shifted back to 13:48, landing in the 13:40-14:00 bucket. Now you can use a regular timechart to count the number of events per 20-minute span. You want to compare (what is now) the last 20 minutes of each hour, so remove the others (where date_minute=40). Finally, create startTime and endTime labels for each of your results, which add the "BeyondTheHour" factor back in.

Run the query for any time period you like. Try it with Today, Yesterday, Last 7 days, etc. The "last 20 minutes" window will always be based upon the time at which you run the query, and it will always assume you want a 20-minute window. If you want to change this (last 30 minutes, for example), you need to adjust both the "span=20m" element in the timechart command and the calculation for endTime (penultimate line). Currently I hard-code "+1200", which represents your 20-minute requirement (20*60).


tread_splunk
Splunk Employee

Tidied up line 3...


| eval BeyondTheHour=info_search_time%3600, _time=_time-BeyondTheHour 
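
For reference, the full query with that tidied line swapped in reads:

index="firewall_std" src="10.19*.*.*"
| addinfo 
| eval BeyondTheHour=info_search_time%3600, _time=_time-BeyondTheHour 
| timechart count, max(BeyondTheHour) as BeyondTheHour span=20m 
| eval date_minute=strftime(_time,"%M") 
| where date_minute=40 
| eventstats max(BeyondTheHour) as BeyondTheHour 
| eval startTime=strftime(_time+BeyondTheHour,"%d/%m/%Y %H:%M:%S.%3Q"), endTime=strftime(_time+BeyondTheHour+1200,"%d/%m/%Y %H:%M:%S.%3Q") 
| table startTime endTime count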

mattymo
Splunk Employee

Hi, have you tried the "timewrap" command? It's pretty rad!

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Timewrap

Also, I have some old outlier SPL you can play with from a few years back. I am guessing you are trying to analyze "normal"?

https://github.com/matthewmodestino/outliers

- MattyMo

tread_splunk
Splunk Employee

I'm playing around with the following. I've used index=_internal; replace that with your search.

index=_internal 
| eval date_minute=strftime(_time,"%M"), date_hour=strftime(_time,"%H") 
| eval stop_min=strftime(now(),"%M"), start_min=strftime(now()-1200,"%M") 
| where date_minute>start_min AND date_minute<stop_min 
| stats count by date_hour

I've assumed you always want to do "Last 20 minutes". You can't change that without amending the query (line 3 - the 1,200 value, i.e. 20*60 seconds). Clearly line 2 may not be necessary - you may be getting those fields extracted automatically, in which case remove it. From the time picker, choose "Today" or "Yesterday" or a specific day from "Date Range".

What do you want to do if you run the search at, for example, 10 minutes past an hour, i.e. the "last 20 mins" crosses back into the previous hour? At the moment this confuses the results, because some of the last 20 minutes is associated with one hour and the rest with the previous hour.
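
A sketch of one way around that edge case, assuming the windows should always be the 20 minutes ending at the search time and at each whole hour before it: measure each event's age modulo 3600 seconds instead of comparing minute strings, so windows that straddle a clock hour still land in a single bucket.

index=_internal
```seconds past the most recent window boundary (now, now-1h, now-2h, ...)```
| eval offset=(now()-_time)%3600
```keep only events inside a 20-minute window```
| where offset<1200
```label each window by how many hours back it ends```
| eval hours_back=floor((now()-_time)/3600)
| stats count by hours_back
| sort hours_back

For the original question, stats dc(src) in place of count would give the deduplicated per-window figure.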
