Currently our clustered environment reports errors related to lookup size: "The current bundle directory contains a large lookup file that might cause bundle replication fail". Is there official documentation on the recommended maximum size for these files? I know from experience that using and frequently updating large lookup files is bad practice, since it generates a lot of traffic, and even more so when many lookups are constantly updated, because each update produces a new knowledge bundle that must be replicated across the whole cluster.
Currently we have many lookups of up to 1.2 GB in size; some are updated once a day and others every 5 minutes.
One of the improvements we have made is to add them to a replication blacklist, but in certain cases we have to check that they are not defined as automatic lookups, because then the cluster reports that the indexers cannot build the lookups. (Our blacklist entries look roughly like the sketch below.)
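For reference, excluding a lookup from the knowledge bundle is done in distsearch.conf on the search head; the entry below is only a sketch (the app and file names are examples, and on newer Splunk versions the stanza is called [replicationDenylist]):

    # distsearch.conf -- paths are relative to $SPLUNK_HOME/etc
    [replicationBlacklist]
    exclude_big_lookup = apps/myapp/lookups/big_lookup.csv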
Updating lookups that big in the KV store will have its own issues. The KV store does not compress data, and you have to enlarge the oplog size limit if you are in a search head cluster, especially when updating that frequently. It also does not help automatic lookups, because what gets shared to the indexers is still a CSV. Lookups this big should only be used in searches that are well factored to reduce the result set, so that the lookup enrichment can happen on the search head as the very last step in the search.
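If you do stay on the KV store in a SHC, the oplog size is set in server.conf (value is in MB; the number here is just an example, and resizing it has its own procedure):

    # server.conf on the search head cluster members
    [kvstore]
    oplogSize = 2000

And to make the "lookup as the very last step" point concrete, a rough sketch in SPL (index, field, and lookup names are made up): everything after the stats runs on the search head against a much smaller result set, so only that reduced set touches the big lookup.

    index=web sourcetype=access_combined status>=500
    | stats count AS errors BY clientip
    | where errors > 10
    | lookup big_asset_lookup ip AS clientip OUTPUT owner, site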