Splunk Search

Excessive Firewall Denies query

Meena27
Explorer

I am trying to write a rule that fires when a single source IP creates 40 denied connections to at least 40 destinations within five minutes.

| stats count dc(dest) as dest_count, values(dest) as Dest by action, src, signature_id, dest_port
| search dest_count>40 AND count > 40
| eval searchtimespanminutes=5

Could anyone tell me whether using "searchtimespanminutes" is right and whether this will work? Any suggestions would be much appreciated.


rsennett_splunk
Splunk Employee

searchtimespanminutes is a deprecated time modifier; the current equivalent is earliest=, which goes at the start of your search (before the first pipe).
You can read about them here: http://docs.splunk.com/Documentation/Splunk/6.2.3/SearchReference/SearchTimeModifiers
So, first... you're not specifying a time modifier at all... and second, Splunk uses the time modifier to GET the data; it isn't applied as a filter afterwards... at least not like that. So right now, your eval just creates a field with that name, holding the value 5, and it has nothing to do with the timespan...
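
For instance, with an explicit five-minute window in the base search (the index and action values here are placeholders for whatever your firewall data actually uses):

index=firewall action=denied earliest=-5m | stats count dc(dest) as dest_count by src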

As far as an alert is concerned... you look at the search a bit differently:

basically, you're creating a search for the alert, and the alert triggers under one of several conditions. The two that apply here are "number of results" and/or a custom condition, which would be search dest_count>40 AND count > 40; or you can leave search dest_count>40 in the search itself for context (for when you look at it in a year) and have the condition be count>40.
There is a slightly different set of conditions for a real-time search...
So you're going to build a search that produces some number of results that the alert structure looks at and uses as a trigger.
Make sense?
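
As a rough sketch of that scheduled alert search (the index and action values are assumptions, and I've dropped the split-by fields down to src for brevity), run on a 5-minute schedule over the last 5 minutes:

index=firewall action=denied earliest=-5m
| stats count dc(dest) as dest_count values(dest) as Dest by src
| where dest_count>40 AND count>40

With the where clause doing the filtering, the trigger condition can be as simple as "number of results is greater than 0".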

If you're looking at historical data to find out if that condition has been met at all... in say, the past year... that's another story.
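
One way to do that (again a sketch with the same placeholder names) is to bucket the data into five-minute windows and evaluate the condition in each one:

index=firewall action=denied earliest=-1y
| bin _time span=5m
| stats count dc(dest) as dest_count by _time, src
| where dest_count>40 AND count>40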

I have something like this watching firewall data from my router and those of a couple of colleagues:

index=syslog action="DROP" host="192.168.1.*"
| timechart span=1h count
| streamstats avg(count) as Count_Average stdev(count) as Standard_Deviation
| eval Count_Average = round(Count_Average,0)
| eval Standard_Deviation = round(Standard_Deviation,0)
| where count>Count_Average+(2*Standard_Deviation)
| rename count as Count

It runs over a span of 7 days and updates a dashboard... which basically shows "weird stuff that should be observed"

With Splunk... the answer is always "YES!". It just might require more regex than you're prepared for!

woodcock
Esteemed Legend

I assume this is some "rules facility" inside the Splunk ES app, right?


Meena27
Explorer

I did try that... it didn't work...
