
How to improve the performance of my search to find newly opened ports

wweiland
Contributor

Hi,

I'm trying to find new ports that have been opened on a system, and I have 24 hours of existing data to work with.

sourcetype=Unix:ListeningPorts
| join host
    [ search sourcetype=Unix:ListeningPorts
      | bucket span=1h _time
      | dedup _time, host
      | stats count AS hostcount by host
      | table host, hostcount ]
| bucket span=1h _time
| dedup host, dest_port, _time
| stats values(hostcount) AS hostcount, count AS num_data_samples by host, dest_port
| where hostcount = 24 AND num_data_samples = 1

The search takes a few minutes to complete, and I'm trying to get the runtime down so it can run as an alert. I believe the subsearch is causing the delay, but I'm not sure how else to get the number of times a host reported in a 24-hour period. If I don't use the subsearch, then I can't do the dedup that removes the multiple entries per hour (multiple dest_port values) for a host.

Does anyone have any suggestions on how to make this better?

TIA

Solution

somesoni2
Revered Legend

Give this a try

sourcetype=Unix:ListeningPorts
| bucket span=1h _time
| dedup host, dest_port, _time
| eventstats dc(_time) AS hostcount by host
| stats values(hostcount) AS hostcount, count AS num_data_samples by host, dest_port
| where hostcount = 24 AND num_data_samples = 1
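
If this is scheduled as an alert, pinning the time range to whole hours keeps the 24-bucket comparison stable from run to run. A minimal sketch, assuming an hourly schedule (the earliest/latest hour-snapping is the only addition):

sourcetype=Unix:ListeningPorts earliest=-24h@h latest=@h
| bucket span=1h _time
| dedup host, dest_port, _time
| eventstats dc(_time) AS hostcount by host
| stats values(hostcount) AS hostcount, count AS num_data_samples by host, dest_port
| where hostcount = 24 AND num_data_samples = 1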

tred23
Path Finder

This is an awesome idea!


DalJeanis
Legend

Yep, same conclusion I came to.

The inner query was not calculating a hostcount, but an hourcount for how many hours a host was active over the time period.

However, I'm not sure what the business meaning of "I have 24 hours of data" is.


DalJeanis
Legend

Do you mean "which have been running every hour for the past 24 hours", or "which I have been monitoring for at least 24 hours"?

sourcetype=Unix:ListeningPorts
| table _time, host, dest_port
| bin _time span=1h
| eventstats dc(_time) as OpHours, min(_time) as FirstHostEvent by host
| stats min(_time) as FirstPortEvent, max(OpHours) as OpHours, max(FirstHostEvent) as FirstHostEvent by host, dest_port
| table host, FirstHostEvent, OpHours, dest_port, FirstPortEvent
| sort 0 host dest_port
| where FirstHostEvent < relative_time(now(),"-1d") AND FirstPortEvent > relative_time(now(),"-1h")

The last comparison assumes an hourly scan.
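
Note that for FirstHostEvent < relative_time(now(),"-1d") to ever be true, the search window has to cover more than 24 hours. A sketch of a matching base search, assuming an hourly schedule, with the rest of the pipeline above unchanged:

sourcetype=Unix:ListeningPorts earliest=-25h@h latest=@h

The extra hour gives hosts that have been monitored for at least a full day a chance to qualify.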


DalJeanis
Legend

This strategy is going to throw an alert whenever the last use of a port scrolls off the time horizon that you are searching. Might want to merge in an inputcsv and send out an outputcsv with the known hosts and ports.
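
A minimal sketch of that merge, assuming an hourly alert and a hypothetical file name (known_host_ports.csv); the CSV has to be seeded once (for example by running everything up to the outputcsv without the append) or inputcsv will complain about a missing file:

sourcetype=Unix:ListeningPorts earliest=-1h@h latest=@h
| stats count by host, dest_port
| eval seen_now=1
| append [| inputcsv known_host_ports.csv | eval known=1] ``` previously recorded host/port pairs ```
| stats max(seen_now) AS seen_now, max(known) AS known by host, dest_port
| eval is_new=if(seen_now=1 AND isnull(known), 1, 0) ``` listening now, never recorded before ```
| fields host, dest_port, is_new
| outputcsv known_host_ports.csv ``` write the merged known list back ```
| where is_new=1

Because the merged list is written back on every run, a host/port pair alerts only the first time it appears, and pairs that later scroll off the search horizon stay in the CSV instead of re-alerting.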


wweiland
Contributor

Ugh, good point. I'll have to think that through.

Thanks again!!!


wweiland
Contributor

I believe this approach would account for systems where applications are being decommissioned. Here I'm only looking for a port count of 1 where the last recorded time falls within the final hour of the search window. Does this look right?

| pivot SecOps__Listening_Ports Unix_Listening_Ports SPLITROW _time, SPLITROW host, SPLITROW dest_port
| eventstats max(_time) AS maxtime
| bucket span=1h _time
| dedup host, dest_port, _time
| eventstats dc(_time) AS hostcount by host
| stats values(maxtime) AS maxtime, max(_time) AS lasttime, values(hostcount) AS hostcount, count AS portcount by host, dest_port
| where hostcount >= 24 AND portcount = 1 AND lasttime >= relative_time(maxtime, "-1h")

Sorry, I also switched it to a pivot search over the data model.


somesoni2
Revered Legend

I believe you missed including the fields OpHours and FirstHostEvent in the stats.
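
For reference, the stats line with both fields carried through (as the query above now shows) is:

| stats min(_time) as FirstPortEvent, max(OpHours) as OpHours, max(FirstHostEvent) as FirstHostEvent by host, dest_port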

DalJeanis
Legend

Crud, you're right.

Hey, I added them using eventstats, so they're scrolled off to the right in my head from all the other fields. 😉


wweiland
Contributor

Have been monitored for at least 24 hours.


wweiland
Contributor

I wanted to avoid false positives for systems being turned up in an active environment.


wweiland
Contributor

so simple!!
