This is not a question; rather, I'm sharing something I discovered during a Splunk OnDemand support call.
I thought I was a bit of a Splunk pro, but it just goes to show there's always something to learn, and this one was so simple it's a little embarrassing 😳
Imagine you have a very large lookup with 5 million+ rows (in my case a KV store, containing an extract from Maxmind DB with some other internal references added).
If you want to do a CIDR match on this, you set up a lookup definition with Match Type "CIDR(network)". Now run your query (IP address obfuscated in the example):
| makeresults
| eval ip4 = "127.0.0.1"
| lookup maxmind network AS ip4
For me on Splunk Cloud, this takes around 50 seconds, so I contacted Splunk Support for some assistance in making this viable (50 seconds is just way too high).
I already understood that the issue is that the cidrmatch has to run against every single row. I added a pre-filter to my lookup definition and proved that with the pre-filter it ran much, much faster, but obviously that limited its use to just the filtered rows.
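For anyone who hasn't used one, a KV store lookup definition's pre-filter is set in the definition's advanced options and is stored as a filter attribute in transforms.conf. A minimal sketch of what mine looked like (the stanza name, field list, and filter value here are illustrative, not my exact config):

```
[maxmind]
external_type = kvstore
collection    = maxmind
fields_list   = network, country_name, city_name, postal_code
match_type    = CIDR(network)
filter        = country_name="Japan"
```

The filter is applied when rows are fetched from the KV store, which is why it is so much faster - but because it's baked into the definition, every search that uses the lookup is restricted to the filtered rows.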
I tried messing around with inputlookup, but couldn't get it any better
| inputlookup maxmind
| where country_name="Japan" AND city_name="Osaka"
This still took the same ~50 seconds to run
Of course, if I had read the documentation properly, I would have seen that "where" is actually an argument of the inputlookup command. Changing this to:
| inputlookup maxmind where country_name="Japan" city_name="Osaka"
Made all the difference - this now runs in 5-7 seconds, because the WHERE clause now runs the same way as if you had added the pre-filter to the lookup definition.
To use this in a search to enrich other data, you can use:
| makeresults
| eval ip4 = "127.0.0.1"
| appendcols
[| inputlookup maxmind where country_name="Japan" city_name="Osaka"]
| where cidrmatch(network, ip4)
Obviously, the tighter you can get your WHERE clause, the faster this runs. You can also use accelerated fields in your lookup (if using a KV store) to enhance this further; it will depend entirely on your data and how you can filter it down to the smallest possible data set before continuing.
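For anyone who hasn't set up accelerated fields: they are defined in collections.conf, and each one creates an index over the listed fields in the underlying KV store. The stanza and index names below are illustrative, not my exact config:

```
[maxmind]
accelerated_fields.country_idx = {"country_name": 1}
accelerated_fields.geo_idx     = {"country_name": 1, "city_name": 1, "postal_code": 1}
```

An index that covers the fields in your inputlookup where clause lets the KV store answer the filter from the index instead of scanning the whole collection.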
For me, using the same 5 million row KV store and maxmind data as my example
| makeresults
| eval ip4 = "127.0.0.1"
| appendcols
[| inputlookup maxmind where country_name="Japan" city_name="Osaka" postal_code="541-0051" network_cidr="127.0.*"]
| where cidrmatch(network, ip4)
runs in ~0.179 seconds [using actual IP address, not the fake one above]. Your mileage may vary, but I hope this helps someone else trying to figure this out.
I haven't tried the same with a CSV lookup, but I imagine it would be very similar.