Getting Data In

Efficient search?

a212830
Champion

Hi,

I have the following search, which is taking quite a while, and I was wondering if there are any obvious improvements for it. It does parse a fair number of events (1 million+). I'm trying to count unique high-level URLs (hostnames).

index=proxy sourcetype="leef" usrName!="-" 
| eval url=urldecode(url) 
| eval url=ltrim(url, "http://") 
| eval url=ltrim(url, "https://") 
| eval url=split(url, "/") 
| eval url=mvindex(url,0) 
| dedup src, dst 
| top limit=100 url
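As an aside, `ltrim(url, "http://")` trims any leading run of the *characters* h, t, p, :, and /, not the literal prefix, so a hostname that happens to start with one of those letters gets clipped too. Python's `str.lstrip` behaves the same way, which makes the pitfall easy to demonstrate:

```python
# str.lstrip, like Splunk's ltrim, removes a *set* of characters
# from the left, not a literal prefix.
url = "http://twitter.com/some/path"

# The set {h, t, p, :, /} also eats the leading "t" of "twitter":
print(url.lstrip("http://"))   # -> witter.com/some/path

# Removing the literal prefix avoids the problem:
prefix = "http://"
if url.startswith(prefix):
    url = url[len(prefix):]
print(url)                     # -> twitter.com/some/path
```

This is one reason the accepted answer below extracts the hostname with `rex` instead of two `ltrim` calls.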

somesoni2
Revered Legend

Try this

index=proxy sourcetype="leef" usrName!="-"
| fields src dst url
| dedup src, dst
| eval url=urldecode(url)
| rex field=url "https*\:\/\/(?<url>[^\/]+)"
| top limit=100 url
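The rewrite is faster mainly because `fields` restricts extraction to the three fields actually used and `dedup` runs before the per-event `eval`/`rex` work, so those commands touch far fewer events; the single `rex` also replaces four `eval` steps. The regex itself can be sanity-checked outside Splunk, e.g. in Python (note Python spells named groups `(?P<name>...)` rather than `(?<name>...)`):

```python
import re

# Same pattern as the rex in the answer, with Python's named-group syntax.
# "https*" matches "http" followed by zero or more "s", so it covers both schemes.
pattern = re.compile(r"https*\:\/\/(?P<url>[^\/]+)")

for u in ["http://example.com/a/b", "https://splunk.example.org/path?q=1"]:
    m = pattern.search(u)
    print(m.group("url"))
# -> example.com
# -> splunk.example.org
```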



a212830
Champion

Thanks!!!!
