Splunk Search

Lookup filtering performance on very large lookups

lindonmorris
Explorer

This is not a question; rather, I'm sharing something I discovered during a Splunk OnDemand support call.

I thought I was a bit of a Splunk pro, but it just goes to show there's always something to learn, and this one was so simple it's a little embarrassing 😳

 

Imagine you have a very large lookup with 5 million+ rows (in my case a KV store containing an extract from the Maxmind DB, with some other internal references added).

If you want to do a CIDR match on this, you set up a lookup definition with Match Type "CIDR(network)".
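For anyone managing lookups via config files rather than the UI, that lookup definition corresponds to roughly this transforms.conf stanza (the collection name and field list here are illustrative placeholders, not my actual config):

```ini
[maxmind]
external_type = kvstore
collection = maxmind_geo
fields_list = network, country_name, city_name, postal_code
# This is the setting the "Match Type" UI field writes:
match_type = CIDR(network)
```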

Now run your query (IP address obfuscated in the example):

 

 

| makeresults 
| eval ip4 = "127.0.0.1"
| lookup maxmind network AS ip4

 

 

For me on Splunk Cloud, this takes around 50 seconds, so I contacted Splunk Support for some assistance in making it viable (50 seconds is just way too high).

 

I already understood that the issue is that cidrmatch has to run against every single row. I added a pre-filter to my lookup definition and proved that with the pre-filter it ran much, much faster, but obviously that limited its use to just the filtered rows.
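To see why that full scan is so expensive, here's a rough Python sketch (not Splunk internals, just an analogy using the standard ipaddress module): a CIDR match can't be satisfied by a simple equality index, so without a pre-filter every row of the lookup has to be tested against the input IP.

```python
import ipaddress

def cidr_match(network: str, ip: str) -> bool:
    """Rough analogue of Splunk's cidrmatch(): True if ip falls inside network."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(network)

# With no pre-filter, every row of the lookup has to be tested (a full scan).
# Imagine this list having 5 million+ entries:
rows = [
    {"network": "10.0.0.0/8",     "country_name": "Japan"},
    {"network": "192.168.0.0/16", "country_name": "Japan"},
    {"network": "172.16.0.0/12",  "country_name": "France"},
]
ip4 = "192.168.1.1"
matches = [r for r in rows if cidr_match(r["network"], ip4)]  # scans all rows
```

Pre-filtering on an indexed equality field (country, city, etc.) shrinks the list before the per-row CIDR test ever runs, which is exactly what the lookup definition's pre-filter was doing.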

 

I tried messing around with inputlookup, but couldn't get it any better:

 

 

| inputlookup maxmind 
| where country_name="Japan" city_name="Osaka"

 

 

This still took the same ~50 seconds to run.

 

Of course, if I had read the documentation properly, I would have seen that "where" is actually an argument of the inputlookup command itself. Changing this to:

 

 

| inputlookup maxmind where country_name="Japan" city_name="Osaka"

 

 

Made all the difference: this now runs in 5-7 seconds, because the where clause is applied as the lookup is read, the same as if you had added the pre-filter to the lookup definition.

 

To use this in a search to enrich other data, you can use:

 

 

| makeresults 
| eval ip4 = "127.0.0.1"
| appendcols 
    [| inputlookup maxmind where country_name="Japan" city_name="Osaka"]
| where cidrmatch(network, ip4)

 

 

Obviously, the tighter you can make your where clause, the faster this runs. If you're using a KV store, you can also use accelerated fields in your lookup to speed things up further. How far you can take this depends entirely on your data and how you can filter it down to the smallest possible set before continuing.
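For reference, accelerated fields are defined in collections.conf on the KV store collection; they create indexes on the fields you filter by. The collection and acceleration names below are illustrative, not my actual config:

```ini
[maxmind_geo]
# Index the fields used in the inputlookup where clause so the
# pre-filter can use an index instead of scanning the collection:
accelerated_fields.geo_accel = {"country_name": 1, "city_name": 1}
```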

 

For me, using the same 5-million-row KV store and Maxmind data as in my example,

 

 

| makeresults 
| eval ip4 = "127.0.0.1"
| appendcols 
    [| inputlookup maxmind where country_name="Japan" city_name="Osaka" postal_code="541-0051" network_cidr="127.0.*"]
| where cidrmatch(network, ip4)

 

 

runs in ~0.179 seconds [using the actual IP address, not the fake one above]. Your mileage may vary, but I hope this helps someone else trying to figure this out.

I haven't tried the same with a CSV lookup, but I imagine it would be very similar.


bowesmana
SplunkTrust

Nice observations.

It would be interesting to know, if you are running a clustered indexer environment, whether a CSV lookup, which can be distributed to the indexers, would improve the run time: the lookup would then run on the distributed indexers rather than all the data being processed on the SH where the KV store lives.

You are right that the 'where' clause as part of inputlookup is often overlooked, but I have also found it makes a real difference, because it filters the data going into the pipeline.

 
