Hi, I have a search that shows the output of traffic as sum(sentbyte)
This is my search, names have been changed to protect the guilty:
________________________________________________
index=netfw host="firewall"
srcname IN (host1,host2,host3...)
action=allowed dstip=8.8.8.8
| eval mytime=strftime(_time,"%Y/%m/%d %H %M")
| stats sum(sentbyte) by mytime
________________________________________________
The results show the peak per minute, which I can graph with a line chart, and they range up to 10,000,000.
I have tried to set up the alerting when the sum(sentbyte) is over 5,000,000 but cannot get it to trigger.
My alert is set to custom:
| stats sum(sentbyte) by mytime > 5000000
I may be on the wrong track for what I am trying to do, but I have spent many hours going in circles with this one. Any help is greatly appreciated.
Hi @Drewprice,
there are some conceptual and logical errors in your search:
First, you have to define a time period for the check (e.g. every 10 minutes); otherwise there is no point in using _time in your search.
Second, you don't need to transform the timestamp into human-readable form.
Last (though this is my interpretation), why do you want to calculate the peak? Usually you calculate the amount of bytes sent in a period, and in any case you are using the sum function, so you aren't calculating the peak (for the peak you would use max).
So you should try something like this:
if you want to trigger an alert when the amount of bytes in one minute is more than 5,000,000, you could run something like this:
index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| timechart sum(sentbyte) AS count span=1m
| where count>5000000
or
index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| bin span=1m _time
| stats sum(sentbyte) AS count BY _time
| where count>5000000
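If the per-minute peak (the largest single event, rather than the total) is actually what's wanted, a sketch swapping sum for max as mentioned above (assuming the same sentbyte field and filters):
index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| bin span=1m _time
| stats max(sentbyte) AS peak BY _time
| where peak>5000000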
Ciao.
Giuseppe
As @gcusello already pointed out, when working with _time it's usually good (there are some use cases against it, but they are rare) to leave it as a Unix timestamp throughout your whole search pipeline and only render it as human-readable text at the end for presentation. (You can also use fieldformat to keep the data in machine-convenient form while presenting the time to the user as a formatted string; that's my preferred approach.)
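A sketch of that fieldformat approach, assuming the same search as above: _time stays numeric (so sorting and the where comparison work on raw values) but is displayed as a formatted string.
index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| bin span=1m _time
| stats sum(sentbyte) AS count BY _time
| where count>5000000
| fieldformat _time=strftime(_time,"%Y/%m/%d %H:%M")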
The question is what kind of data you actually have and how your firewall reports traffic on an ongoing connection. Some firewalls (for example Juniper) give you an event on flow creation and on flow closing with just one value on session close giving you summarized traffic across the whole flow. Other firewalls can give you "keep-alive" events on already established sessions providing you with differential traffic updates (but some can also give you aggregated traffic over the whole session).
So it's not that obvious how to query for that data.
Also if you have your data normalized into CIM datamodel and your datamodel accelerated, you could use that datamodel to make your searches way way faster.
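For example, with data normalized to the CIM Network_Traffic datamodel and acceleration enabled, an equivalent search against the summaries might look something like this (a sketch; the field constraints in the where clause are assumptions and would need adjusting to your environment):
| tstats summariesonly=true sum(All_Traffic.bytes_out) AS count from datamodel=Network_Traffic where index=netfw All_Traffic.action=allowed All_Traffic.dest_ip=8.8.8.8 by _time span=1m
| where count>5000000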
Thank you so much! I was going down that track but could not put it together.
Hi @Drewprice ,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors 😉