
Efficient search?

a212830
Champion

Hi,

I have the following search, which is taking quite a while, and I was wondering if there are any obvious improvements to it. It parses a fair number of events (1 million+). I'm trying to count unique high-level URLs.

index=proxy sourcetype="leef" usrName!="-" 
| eval url=urldecode(url) 
| eval url=ltrim(url, "http://") 
| eval url=ltrim(url, "https://") 
| eval url=split(url, "/") 
| eval url=mvindex(url,0) 
| dedup src, dst 
| top limit=100 url
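
One thing worth knowing about the ltrim calls above: ltrim(X, Y) strips any of the individual characters in Y from the left of X, not the literal string Y, so a hostname that begins with h, t, p, s, :, or / gets clipped as well. A quick way to see it (a throwaway sketch; the sample hostname is made up):

| makeresults 
| eval url="http://thesite.example/path" 
| eval url=ltrim(url, "http://")

This returns url="esite.example/path" rather than "thesite.example/path", because the leading t and h of the hostname are also in the trim set. The rex approach in the accepted answer below avoids this.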
Solution

somesoni2
Revered Legend

Try this

index=proxy sourcetype="leef" usrName!="-" 
| fields src dst url 
| dedup src, dst 
| eval url=urldecode(url) 
| rex field=url "https*\:\/\/(?<url>[^\/]+)" 
| top limit=100 url
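
The gains here come from keeping only the three fields the rest of the search needs (fields src dst url) and running dedup before any string work, so urldecode and rex only run on the deduplicated rows instead of all 1 million+ events. To sanity-check the rex against a sample value (a throwaway sketch; the encoded URL is made up):

| makeresults 
| eval url="https%3A%2F%2Fwww.example.com%2Fsome%2Fpath" 
| eval url=urldecode(url) 
| rex field=url "https*\:\/\/(?<url>[^\/]+)"

The named capture group overwrites url with just the host portion, www.example.com in this case, which is what top then counts.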

a212830
Champion

Thanks!!!!
