Getting Data In

Efficient search?

a212830
Champion

Hi,

I have the following search, which is taking quite a while, and I was wondering if there are any obvious improvements. It parses a fair number of events (1 million+). I'm trying to count unique high-level URLs.

index=proxy sourcetype="leef" usrName!="-" 
| eval url=urldecode(url) 
| eval url=ltrim(url, "http://") 
| eval url=ltrim(url, "https://") 
| eval url=split(url, "/") 
| eval url=mvindex(url,0) 
| dedup src, dst 
| top limit=100 url
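One thing worth noting about the original search: Splunk's `ltrim(X, Y)` strips any leading characters drawn from the set in `Y`, not the literal prefix string, so `ltrim(url, "http://")` can eat the start of the hostname itself. Python's `str.lstrip` has the same character-set semantics, which makes the pitfall easy to demonstrate (the hostname below is just an illustration):

```python
# Splunk's ltrim(url, "http://") removes leading characters drawn from the
# set {h, t, p, :, /} -- not the literal string "http://".
# Python's str.lstrip behaves the same way.
url = "http://test.example.com/page"

stripped = url.lstrip("http://")  # keeps stripping past the "//"
print(stripped)  # "est.example.com/page" -- the leading "t" of "test" is gone
```

This is why the accepted answer switches to `rex`, which matches the scheme as a literal pattern instead of a character set.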

somesoni2
Revered Legend

Try this. It restricts the search to the fields it needs, dedups before the per-event evals so fewer events are processed, and replaces the `ltrim`/`split`/`mvindex` chain with a single `rex` that extracts the hostname:

index=proxy sourcetype="leef" usrName!="-" 
| fields src dst url
| dedup src, dst 
| eval url=urldecode(url) 
| rex field=url "https*\:\/\/(?<url>[^\/]+)"
| top limit=100 url
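The `rex` works because `https*\:\/\/` matches both schemes (`http` plus an optional `s`) and `[^\/]+` captures everything up to the next slash, i.e. the host. The same pattern can be checked outside Splunk; this Python sketch (the sample URLs are illustrative) mirrors it with a named group, which Python spells `(?P<url>...)`:

```python
import re

# Same pattern as the rex above, in Python's named-group syntax.
pattern = re.compile(r"https*://(?P<url>[^/]+)")

for sample in ("https://www.example.com/some/path?q=1",
               "http://example.org/index.html"):
    match = pattern.search(sample)
    print(match.group("url"))  # www.example.com, then example.org
```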


a212830
Champion

Thanks!!!!
