Splunk Search

Unrelated lookup table goes missing after setting [replicationBlacklist]

Path Finder

Hi all, we have a non-clustered, distributed Splunk deployment with a number of large lookup files that are updated regularly. Because of them, the knowledge bundle grew too big, so I had to set [replicationBlacklist] in distsearch.conf, with one entry per excluded lookup file. We add local=t to the lookup commands in our searches and things work fine.
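For reference, the exclusions described above look roughly like this in distsearch.conf on the search head (the entry names and file paths here are illustrative, not from the actual deployment):

```ini
# distsearch.conf on the search head
[replicationBlacklist]
# Each entry is a regex matching one large lookup file to exclude
# from the knowledge bundle. Names and paths are placeholders.
big_lookup1 = apps[/\\]myapp[/\\]lookups[/\\]big_lookup1\.csv
big_lookup2 = apps[/\\]myapp[/\\]lookups[/\\]big_lookup2\.csv
```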

However, we've noticed that another, unrelated lookup table intermittently starts throwing a "lookup table does not exist" error. Restarting the search head helps, but that's not ideal. This lookup is used via inputlookup, so there is no local=t option to apply.

What is the recommended action here? Has anyone seen this before? Thanks,


Esteemed Legend

I assume that these lookups change rarely or never. If so, use your DS (deployment server) or configuration management tool to deploy the lookup files to the indexers and, for the love of efficiency, STOP using local=t. This will solve all of your problems.
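A minimal sketch of what that deployment server side could look like, assuming the lookups live in an app named lookup_app and the indexer hostnames match idx* (both names are assumptions for illustration):

```ini
# serverclass.conf on the deployment server
[serverClass:indexer_lookups]
# Match the indexers that should receive the lookup app.
whitelist.0 = idx*.example.com

[serverClass:indexer_lookups:app:lookup_app]
# Lookup file updates do not require a splunkd restart.
restartSplunkd = false
```

With the lookup files present locally on each indexer, searches can then use the lookup commands without local=t.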



Path Finder

Thanks @woodcock. Many of these lookups change every few hours, some every day, so they change quite frequently. The whole lookup folder is over 700 MB, which is why bundle replication failed in the first place. Do you think using the deployment server is still feasible in my situation?


Esteemed Legend

It depends on what creates the lookups. Let's assume that a Splunk search creates them. You create a cron process on your search head that exports the lookups with your own script to the DS (or Cluster Master), and the DS then causes your indexers to update the app that contains the lookups. That way the lookups will both always exist on your indexers and always be up to date. When you use local=t, you completely destroy the map/reduce efficiencies, because all events come to the search head from that point on.
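The cron job described above could be sketched like this. The hostnames, paths, and the app name lookup_app are all assumptions for illustration; the script defaults to a dry run that only prints the commands it would execute:

```shell
#!/bin/sh
# Sketch: push updated lookup files from the search head to the
# deployment server, then ask the DS to redeploy. Hostnames, paths,
# and the app name "lookup_app" are placeholders, not from the thread.
LOOKUP_DIR="${LOOKUP_DIR:-/opt/splunk/etc/apps/myapp/lookups}"
DS_HOST="${DS_HOST:-deployserver.example.com}"
DS_APP="/opt/splunk/etc/deployment-apps/lookup_app/lookups"
DRY_RUN="${DRY_RUN:-1}"

# Print the command in dry-run mode, otherwise execute it.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Sync the lookup directory to the DS (rsync only transfers changed
# files), then tell the DS to re-scan and redeploy its apps.
run rsync -a "$LOOKUP_DIR/" "$DS_HOST:$DS_APP/"
run ssh "$DS_HOST" /opt/splunk/bin/splunk reload deploy-server
```

Scheduled from cron every hour or so, this keeps the indexer copies close to current without any knowledge-bundle replication.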


Path Finder

No, the lookups are created by programs/scripts unrelated to Splunk. They create the lookup files and drop them into Splunk's lookup folder.
