All Apps and Add-ons

How to improve the performance of my search to find newly opened ports

wweiland
Contributor

Hi,

I'm trying to find new ports that have been opened on a system for which I have 24 hours of existing data.

sourcetype=Unix:ListeningPorts 
| join host [search sourcetype=Unix:ListeningPorts | bucket span=1h _time | dedup _time,host | stats count AS hostcount by host | table host,hostcount] 
| bucket span=1h _time 
| dedup host,dest_port,_time 
| stats values(hostcount) AS hostcount count as num_data_samples by host,dest_port 
| where hostcount = 24 AND num_data_samples = 1

The search takes a few minutes to complete, and I'm trying to get the time down so it can run as an alert. I believe the subsearch is causing the delay, but I'm not sure how else to get the number of times a host reported in a 24-hour period. If I don't use the subsearch, then I can't do the dedup to remove the multiple entries per hour (multiple dest_port) for a host.

Does anyone have any suggestions on how to make this better?

TIA

0 Karma
1 Solution

somesoni2
SplunkTrust
SplunkTrust

Give this a try

sourcetype=Unix:ListeningPorts 
| bucket span=1h _time 
| dedup host,dest_port,_time 
| stats dc(_time) AS hostcount count as num_data_samples by host,dest_port 
| where hostcount = 24 AND num_data_samples = 1

tred23
Path Finder

This is an awesome idea!

0 Karma

DalJeanis
SplunkTrust
SplunkTrust

Yep, same conclusion I came to.

The inner query was not calculating a hostcount, but an hourcount for how many hours a host was active over the time period.

However, I'm not sure what the business meaning of "I have 24 hours of data" is.

0 Karma

DalJeanis
SplunkTrust
SplunkTrust

Do you mean, "Which have been running every hour for the past 24 hours"? or "Which I have been monitoring for at least 24 hours?"

sourcetype=Unix:ListeningPorts 
| table  _time, host, dest_port 
| bin _time span=1h 
| eventstats dc(_time) as OpHours, min(_time) as FirstHostEvent by host
| stats min(_time) as FirstPortEvent, max(OpHours) as OpHours, max(FirstHostEvent) as FirstHostEvent by host, dest_port
| table host, FirstHostEvent, OpHours, dest_port, FirstPortEvent
| sort 0 host dest_port
| where FirstHostEvent < relative_time(now(),"-1d") AND  FirstPortEvent > relative_time(now(),"-1h") 

The last comparison assumes an hourly scan.

0 Karma

DalJeanis
SplunkTrust
SplunkTrust

This strategy is going to throw an alert whenever the last use of a port scrolls off the time horizon that you are searching. Might want to merge in an inputcsv and send out an outputcsv with the known hosts and ports.
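
Something along these lines might do it -- untested, and the csv name and the source/is_new fields are just placeholders; inputlookup/outputlookup against a lookup definition would work the same way:

sourcetype=Unix:ListeningPorts earliest=-1h
| stats count by host, dest_port
| fields host, dest_port
| eval source="current"
| inputcsv append=t known_ports.csv
| eval source=coalesce(source, "baseline")
| stats values(source) AS source by host, dest_port
| eval is_new=if(mvcount(source)=1 AND source="current", 1, 0)
| fields host, dest_port, is_new
| outputcsv known_ports.csv
| where is_new=1

The outputcsv in the middle rewrites the baseline with everything seen so far, so a host/port pair only alerts the first time it ever shows up, even after its last sighting has scrolled past the search window. You'd want to seed known_ports.csv once (run everything up to the outputcsv over your existing 24 hours) before enabling the alert, so the inputcsv has something to read.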

0 Karma

wweiland
Contributor

Ugh, good point. I'll have to think that through.

Thanks again!!!

0 Karma

wweiland
Contributor

I believe this approach would account for systems where applications are being decommissioned. Here I'm only looking for ports with portcount = 1 whose last recorded time is within the last hour of the end of the search. Does this look right?

| pivot SecOps__Listening_Ports Unix_Listening_Ports SPLITROW _time, SPLITROW host, SPLITROW dest_port 
| eventstats last(_time) AS maxtime 
| bucket span=1h _time 
| dedup host,dest_port,_time,maxtime 
| stats values(maxtime) AS maxtime last(_time) AS lasttime dc(_time) AS hostcount count as portcount by host,dest_port 
| where hostcount >= 24 AND portcount = 1 AND lasttime >= relative_time(maxtime, "-1h")

Sorry, also switched it to a pivot table.

0 Karma

somesoni2
SplunkTrust
SplunkTrust

I believe you missed including the fields OpHours and FirstHostEvent in the stats.

DalJeanis
SplunkTrust
SplunkTrust

crud, you're right.

Hey, I added them using eventstats, so they're scrolled off to the right in my head from all the other fields. 😉

0 Karma

wweiland
Contributor

Have been monitored for at least 24 hours.
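
If that's the condition, maybe something like this (untested) keyed off the host's first appearance instead of requiring a hit in all 24 hourly buckets -- the firsthostevent/lastporttime/portcount names are just labels:

sourcetype=Unix:ListeningPorts earliest=-25h
| bucket span=1h _time
| dedup host, dest_port, _time
| eventstats min(_time) AS firsthostevent by host
| stats min(firsthostevent) AS firsthostevent max(_time) AS lastporttime dc(_time) AS portcount by host, dest_port
| where firsthostevent <= relative_time(now(), "-24h") AND portcount = 1 AND lastporttime >= relative_time(now(), "-1h@h")

The extra hour of lookback is there so a host that really has been reporting for 24+ hours has a first bucket older than 24 hours, and the known-ports csv idea above would still be needed if a port can go quiet for longer than the search window.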

0 Karma

wweiland
Contributor

I wanted to avoid false positives for systems being turned up in an active environment.

0 Karma

wweiland
Contributor

so simple!!

0 Karma