Splunk Search

How do I count the number of times an IP address appears in our network traffic over different time ranges?

phspec
Explorer

I'm searching for how frequently an IP address comes up in our network traffic during the 0-30, 30-60, 60-90, and 90-120 day periods. My search looks like the one below:

index=networkTraffic | stats count(Dst_IP) by Dst_IP
0 Karma
1 Solution

somesoni2
Revered Legend

Try like this

index=networkTraffic | bucket _time span=30d | stats count(Dst_IP) as count by _time Dst_IP | eval day=floor((now()-_time)/86400) | eval Period=tostring(day)."-".tostring(day+30) | chart values(count) over Dst_IP by Period

martin_mueller
SplunkTrust
SplunkTrust

No, that's not valid syntax. Look at my April 12th comment below.

0 Karma

martin_mueller
SplunkTrust
SplunkTrust

That should work, provided there's no daylight saving time adding an hour.

The relative_time() approach I posted further down should survive daylight saving time oddities.

0 Karma

phspec
Explorer

I've run the search with 120d@d, 120d, and relative_time(now(), "-120d"). All of them return a '120-150' column, so I don't believe that column comes from the time range the search is executed over. I believe the '120-150' column is produced by this line: eval Period=tostring(day)."-".tostring(day+30). I'm trying to edit my search to include eval if(day<=120 , Period=tostring(day)."-".tostring(day+30), "NULL"), but I keep getting errors due to incorrect syntax. Any help is appreciated.

0 Karma
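
A note on the syntax error above: in SPL, if() is a function used on the right-hand side of an eval assignment, not a wrapper around it. A sketch of the corrected line, assuming rows beyond 120 days should get a null Period (null() drops them from the chart columns):

index=networkTraffic earliest=-120d@d | bucket _time span=30d | stats count(Dst_IP) as count by _time Dst_IP | eval day=floor((now()-_time)/86400) | eval Period=if(day<120, tostring(day)."-".tostring(day+30), null()) | chart values(count) over Dst_IP by Period

Using day<120 rather than day<=120 is what suppresses the '120-150' bin, since day=120 would otherwise label a row "120-150".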

phspec
Explorer

So the time section of my search should look like earliest=relative_time(-120d)?

0 Karma

martin_mueller
SplunkTrust
SplunkTrust

The time range -120d@d contains events older than 120 days, because @d snaps the start back to the beginning of that day. Those events get sorted into the 120-150 bin.

0 Karma
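
To illustrate the snapping (the 2:00 PM search time here is an assumption for illustration): if the search runs at 2:00 PM, earliest=-120d starts the window at 2:00 PM 120 days ago, while earliest=-120d@d snaps back to midnight of that day, pulling in up to 14 extra hours of older events. For those events now()-_time exceeds 120*86400 seconds, so floor((now()-_time)/86400) reaches 120 and the eval labels them '120-150'.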

phspec
Explorer

should I just do -120d instead of -120d@d then?

0 Karma

phspec
Explorer

I get a "120-150" day column even though my earliest=-120d@d. I've also tried earliest=-120d. Could you point me toward why I'm getting an extra fifth column? My search is below:

index=networkTraffic earliest=-120d@d | dedup source | bucket _time span=30d | stats count(Dst_IP) as count by _time Dst_IP | eval day=floor((now()-_time)/86400) | eval Period=tostring(day)."-".tostring(day+30) | chart values(count) over Dst_IP by Period | sort Dst_IP
0 Karma

twinspop
Influencer

There likely is a better way to do this, but it works.

index=networkTraffic earliest=-120d | stats count count(eval(now()-_time<(86400*30))) as 30d count(eval(now()-_time>=(86400*30) AND now()-_time<(86400*60))) as "30d-60d" count(eval(now()-_time>=(86400*60) AND now()-_time<(86400*90))) as "60d-90d" count(eval(now()-_time>=(86400*90))) as "90d-120d" by Dst_IP

twinspop
Influencer

Yeah, I was thinking more in the "arbitrary" ranges mindset without realizing he had requested 30 day buckets. 🙂

0 Karma

martin_mueller
SplunkTrust
SplunkTrust

I'd separate the time-binning and stats-ing into two steps:

index=networkTraffic earliest=-120d | eval bin = case(_time >= relative_time(now(), "-30d"), "0-30", _time >= relative_time(now(), "-60d"), "30-60", _time >= relative_time(now(), "-90d"), "60-90", 1=1, "90-120") | stats count by bin Dst_IP

To make this run at reasonable speed, you'll want to define an accelerated data model for your network traffic, or fill a summary index with daily counts by IP.
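
A minimal sketch of the summary-index approach mentioned above (the index name summary and the source value are assumptions; the fill search would be scheduled to run once a day over the previous day):

index=networkTraffic earliest=-1d@d latest=@d | stats count by Dst_IP | collect index=summary source="daily_dst_ip_counts"

The reporting search then reads the much smaller summary index instead of raw events, e.g. index=summary source="daily_dst_ip_counts" earliest=-120d@d, followed by the same binning and chart logic.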
