Hi,
I'm trying to find new ports that are opened up on a system where I have 24 hours of existing data.
sourcetype=Unix:ListeningPorts
| join host
    [search sourcetype=Unix:ListeningPorts
    | bucket span=1h _time
    | dedup _time, host
    | stats count AS hostcount by host
    | table host, hostcount]
| bucket span=1h _time
| dedup host, dest_port, _time
| stats values(hostcount) AS hostcount count AS num_data_samples by host, dest_port
| where hostcount = 24 AND num_data_samples = 1
The search takes a few minutes to complete, and I'm trying to get the runtime down so it can be scheduled as an alert. I believe the subsearch is causing the delay, but I'm not sure how else to get the number of times a host reported in a 24-hour period. Without the subsearch I can't do the dedup that removes the multiple entries per hour (multiple dest_port values) for a host.
Does anyone have any suggestions on how to make this better?
TIA
Give this a try
sourcetype=Unix:ListeningPorts
| bucket span=1h _time
| dedup host, dest_port, _time
| stats dc(_time) AS hostcount count AS num_data_samples by host, dest_port
| where hostcount = 24 AND num_data_samples = 1
This is an awesome idea!
Yep, same conclusion I came to.
The inner query was not calculating a hostcount but an hourcount: how many hours a host was active over the time period.
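To make that concrete, here is a minimal sketch (same sourcetype and field names as the searches above) of how bucketing to the hour and then taking dc(_time) yields the number of distinct hours each host reported, with no join needed:

```
sourcetype=Unix:ListeningPorts
| bucket span=1h _time
| stats dc(_time) AS hours_active by host
```

A host that reported at least once in every hour of a 24-hour window shows hours_active = 24, which is the value the original subsearch was computing the slow way.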
However, I'm not sure what the business meaning of "I have 24 hours of data" is.
Do you mean "which ports have been open every hour for the past 24 hours," or "which hosts have I been monitoring for at least 24 hours"?
sourcetype=Unix:ListeningPorts
| table _time, host, dest_port
| bin _time span=1h
| eventstats dc(_time) as OpHours, min(_time) as FirstHostEvent by host
| stats min(_time) as FirstPortEvent, max(OpHours) as OpHours, max(FirstHostEvent) as FirstHostEvent by host, dest_port
| table host, FirstHostEvent, OpHours, dest_port, FirstPortEvent
| sort 0 host dest_port
| where FirstHostEvent < relative_time(now(),"-1d") AND FirstPortEvent > relative_time(now(),"-1h")
The last comparison assumes an hourly scan.
This strategy is going to throw an alert whenever the last use of a port scrolls off the time horizon that you are searching. Might want to merge in an inputcsv and send out an outputcsv with the known hosts and ports.
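A rough sketch of that pattern, hedged since the CSV name and merge details here are my own assumptions: keep a running baseline of known host/port pairs in a CSV, append it to each run's results, and write the merged set back out.

```
sourcetype=Unix:ListeningPorts
| stats count by host, dest_port
| fields host, dest_port
| inputcsv append=t known_ports.csv
| dedup host, dest_port
| outputcsv known_ports.csv
```

To actually alert, you would also want to tag the live rows (e.g. with an eval'd flag before the inputcsv) so the search can fire only on pairs absent from the baseline, rather than on anything that scrolled off the search window.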
Ugh, good point. I'll have to think that through.
Thanks again!!!
I believe this approach would account for systems where applications are being decommissioned. Here I'm only looking for ports with a single data sample (portcount = 1) whose last recorded time falls within the last hour of the search window. Does this look right?
| pivot SecOps__Listening_Ports Unix_Listening_Ports SPLITROW _time, SPLITROW host, SPLITROW dest_port
| eventstats last(_time) AS maxtime
| bucket span=1h _time
| dedup host, dest_port, _time, maxtime
| stats values(maxtime) AS maxtime last(_time) AS lasttime dc(_time) AS hostcount count AS portcount by host, dest_port
| where hostcount >= 24 AND portcount = 1 AND lasttime >= relative_time(maxtime, "-1h")
Sorry, also switched it to a pivot table.
I believe you missed including the fields OpHours and FirstHostEvent in the stats.
crud, you're right.
Hey, I added them using eventstats, so they're scrolled off to the right in my head from all the other fields. 😉
Have been monitored for at least 24 hours.
I wanted to avoid false positives for systems being turned up in an active environment.
so simple!!