Splunk Search

Lookup table performance question

jambajuice
Communicator

I am experimenting with some searches that will need to do lookups on some fairly big tables (30 MB or more). I'm wondering whether it will be faster for Splunk to do a single lookup on a really large table or if I should just chain together several lookups on smaller tables. And I'm curious how big a table can get before it should be broken down into smaller sequential lookups (if ever).

Thx.


araitz
Splunk Employee

There aren't any absolutes, but:

  • if you have a large lookup but only need to reference discrete, non-overlapping parts of it in different searches, it is fine to break it into several smaller lookups
  • if you have several lookups and you intend to frequently reference more than one of them at a time (either manually via the search language or automatically via props.conf), you are best served by combining them into one larger lookup
  • 30 MB lookups are not too big by any stretch, but if they change constantly and/or you have a large distributed environment, you may start to see poor performance without some tuning, since each change has to be replicated out to the search peers
  • when a given lookup table gets rather large (more than a few hundred MB), the best practice is to move to an external lookup (usually a script that queries a database or other indexed store) rather than to split the lookup into several smaller lookups
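To make the first two points concrete, here is what the two approaches look like in SPL; the lookup and field names (`asset_info`, `ip`, `owner`, etc.) are hypothetical. The first line is a single combined lookup resolved in one pass; the chain below it splits the same enrichment across three smaller lookups, each of which is a separate pass over the results:

```
| lookup asset_info ip OUTPUT owner, location, department

| lookup asset_owner ip OUTPUT owner
| lookup asset_location ip OUTPUT location
| lookup asset_department ip OUTPUT department
```

If your searches genuinely only ever need one of those output fields at a time, the smaller tables are fine; if they usually need all three, the single combined table avoids the repeated passes.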