Splunk Search

What is the best way to get the running average and standard deviations for external port connects?

packet_hunter
Contributor

So I am looking at Cisco ASA logs and wondering what the best method would be to create an alert when the number of external connection attempts to port 23 (in my network) is +/- 2 standard deviations from the daily average.

Thank you

0 Karma
1 Solution

lguinn2
Legend

There are lots of ways to approach this - but the first question is: how do you define the daily average? Do you want to take into account the variability throughout the day? For example, perhaps the number of attempts averages 1000 at 09:00 but only 500 at 22:00.
Depending on your answer to this question, establishing your alert threshold on-the-fly could get pretty expensive. One solution would be to create a lookup table of the thresholds, perhaps like this:

index=x sourcetype=asa your search here earliest=-30d@d latest=@d
| eval hour=strftime(_time,"%H")
| bin _time span=1d
| stats count by _time hour
| stats avg(count) as Average stdev(count) as StdDev by hour
| outputlookup thresholds_lookup

Schedule this search to run once a day, to update the thresholds. Or just manually create a lookup table where you specify what you want for a threshold. Now you don't need to calculate the average or std deviation repeatedly, you can just look it up.
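To make the idea concrete outside of SPL, here is a minimal Python sketch of what that scheduled search computes: group daily counts by hour of day, then take the per-hour mean and standard deviation. The function name and input shape are illustrative, not part of any Splunk API.

```python
from statistics import mean, stdev
from collections import defaultdict

def build_thresholds(daily_counts):
    """daily_counts: iterable of (day, hour, count) rows, e.g. 30 days of data.
    Returns {hour: (average, stddev)} -- the role played by thresholds_lookup."""
    by_hour = defaultdict(list)
    for _day, hour, count in daily_counts:
        by_hour[hour].append(count)
    # stdev() needs at least two data points per hour
    return {h: (mean(c), stdev(c)) for h, c in by_hour.items() if len(c) > 1}

# Example: three days of counts observed during the 09:00 hour
thresholds = build_thresholds([("d1", "09", 900), ("d2", "09", 1000), ("d3", "09", 1100)])
# thresholds["09"] -> (1000, 100.0)
```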
Then, create a search that actually alerts you - maybe run it once an hour:

index=x sourcetype=asa your search here earliest=-1h@h latest=@h
| bin _time span=1h
| stats count by _time
| eval hour=strftime(_time,"%H")
| lookup thresholds_lookup hour OUTPUT Average StdDev
| where count < (Average - (2*StdDev)) OR count > (Average + (2*StdDev))
| table _time count Average StdDev

And alert when the number of results > 0
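For reference, the `where` clause above reduces to this simple check, sketched here in Python (the function name is illustrative):

```python
def is_anomalous(count, average, stddev, k=2):
    """True when count falls more than k standard deviations from the
    hourly average -- the same condition as the where clause above."""
    return count < average - k * stddev or count > average + k * stddev

# With average=1000 and stddev=100, the alert band is 800..1200:
# is_anomalous(1300, 1000, 100) -> True
# is_anomalous(1100, 1000, 100) -> False
```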

I hope this gives you some ideas. Oh and here is the documentation for the outputlookup command to get you started...


packet_hunter
Contributor

Thank you for the reply.

I initially started with:

index=main sourcetype=cisco:asa dest_port=23 action=blocked direction=Inbound
| timechart span=1d count as D_Count
| appendpipe [stats avg(D_Count) as D_AVG]
| appendpipe [stats stdev(D_Count) as SDev by _time]
| table *

and could not figure out how to build the alert from it...
but your approach is much better.

Thank you Lisa!!

0 Karma