Hi,
I got some great help using a subsearch to match against a directory (CSV or SQL) (https://community.splunk.com/t5/Splunk-Search/What-is-the-fastest-way-to-use-a-lookup-or-match-recor...), but in some cases there could be hundreds of thousands of records, so I'm hitting the 10k subsearch limit.
So the question is: what is a good way to match records against a secondary source, say a lookup file?
I'm able to use lookup, of course, but it feels like there might be a better way.
To use a specific example, I have an index that contains a phone number field. I have a CSV file that groups those phone numbers into lists (like a directory).
If I want 'all numbers in the TEST directory' and I know I'll have more than 10k rows, then I do this:
index=myindex more-criteria-to-try-and-reduce
| lookup directory.csv number output list
| search list="TEST"
While this works, it obviously runs for quite a long time, so I'm just asking if there's a better way!
Thank you!
Hi,
You should look at the KV Store for this amount of data. Of course, lookups are an option, but it depends which one is better.
r. Ismo
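
For reference, here is a rough sketch of what a KV Store based lookup could look like. The collection name (directory_collection) and lookup name (directory_kv) are made up for illustration; the fields number and list come from your example. First define the collection and a lookup definition that points at it:

# collections.conf
[directory_collection]
field.number = string
field.list = string

# transforms.conf
[directory_kv]
external_type = kvstore
collection = directory_collection
fields_list = number, list

After loading your directory data into the collection (e.g. with outputlookup), the search stays essentially the same, just pointing at the KV Store lookup instead of the CSV:

index=myindex more-criteria-to-try-and-reduce
| lookup directory_kv number OUTPUT list
| search list="TEST"

Because this is still a lookup rather than a subsearch, the 10k subsearch limit does not apply, and the KV Store generally scales better than a large CSV file.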