
How to remove/disable files from knowledge bundle or increase maxBundleSize in distsearch.conf?

tokio13
Path Finder

Hello,

I'm experiencing the following issue on one of my search heads (total of 3):

Knowledge bundle size=2608MB exceeds max limit=2000MB. Distributed searches are running against an outdated knowledge bundle. Please remove/disable files from knowledge bundle or increase maxBundleSize in distsearch.conf.

 

Why is the SH behaving like this when the others have the same config?

 

1 Solution

tokio13
Path Finder

I was able to solve the problem by going to /opt/splunk/etc/apps/search/lookups/ and removing a .csv file that had been exported as the output of an old search query and was no longer needed.

Thanks everyone once again!
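For anyone hitting the same issue, a quick way to spot oversized lookup files before removing anything is something like this (the path and the 100 MB threshold are only examples; adjust for your environment):

# find unusually large lookup files across all apps on the search head
find /opt/splunk/etc/apps/*/lookups -type f -size +100M -exec ls -lh {} \;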




tshah-splunk
Splunk Employee

Hey @tokio13,

Can you run the btool command to check what value is configured for the maxBundleSize parameter on one of your SHC members?

$SPLUNK_HOME/bin/splunk btool distsearch list --debug | grep maxBundleSize

If this returns a value of 2000 or less, consider updating the parameter to a value higher than your current bundle size.
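For reference, the maxBundleSize setting lives in the [replicationSettings] stanza of distsearch.conf on the search heads. A minimal sketch, assuming you decide to raise the limit above the reported 2608 MB bundle (3000 is only an illustrative value):

# $SPLUNK_HOME/etc/system/local/distsearch.conf (or pushed via the deployer in an SHC)
[replicationSettings]
# maximum knowledge bundle size in MB; choose a value larger than the actual bundle
maxBundleSize = 3000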

---
If you find the answer helpful, an upvote/karma is appreciated

somesoni2
Revered Legend

Are you using search head clustering for those 3 SHs of yours? It could be a local artifact on that SH that is causing the knowledge bundle from that SH to be large.

Use the instructions below to check the details of the knowledge bundle on the troubling SH and compare them with a SH that is not having this issue.

https://docs.splunk.com/Documentation/Splunk/8.2.4/DistSearch/Troubleshootknowledgebundlereplication...
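As a rough sketch of what that comparison can look like from the command line (on a search head the knowledge bundles are typically tarballs under $SPLUNK_HOME/var/run/; the exact path can differ by version and setup):

# list the bundles on the SH, largest first
ls -lhS $SPLUNK_HOME/var/run/*.bundle
# show the largest files inside the newest bundle to see what is inflating it
tar -tvf "$(ls -t $SPLUNK_HOME/var/run/*.bundle | head -1)" | sort -k3 -rn | head -20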

tokio13
Path Finder

I followed the documentation that you mentioned and everything looks the same on all three of my Search Heads.


tokio13
Path Finder

Unfortunately, I was unable to resolve my issue with the suggested answers. I'm still working on this, but I appreciate your suggestions.


gcusello
SplunkTrust

Hi @tokio13,

what's the problem: can't you access distsearch.conf, or is it something else?

Could you share more info?

I have already solved the same problem for one of our customers.

Ciao.

Giuseppe

 


tokio13
Path Finder

I have access to distsearch.conf on all three of my search heads.

In this environment the Cluster Master instance also acts as the Deployer; they sit on the same instance (plus 3 indexers).

I get the [ Knowledge bundle size=2608MB exceeds max limit=2000MB. Distributed searches are running against an outdated knowledge bundle. Please remove/disable files from knowledge bundle or increase maxBundleSize in distsearch.conf ] notification on the captain of the search head cluster.

This affects my searches: 

  • Expected common latest bundle version on all peers after sync replication, found none. Reverting to old behavior - using most recent bundles on all

 

 


gcusello
SplunkTrust

Hi @tokio13 ,

if you can access distsearch.conf, why can't you use my solution?

I used it a few days ago in a project with Splunk Professional Services.

Ciao.

Giuseppe

gcusello
SplunkTrust

Hi @tokio13,

as @isoutamo said, you probably have very large lookups that are sent from the SHs to the Indexers.

As you can see at https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/Distsearchconf , you have to blacklist some (or all) of them in distsearch.conf:

[replicationBlacklist]
blacklist1 = lookup1
blacklist2 = lookup2
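As a slightly more concrete sketch (the lookup names below are made up; the values are wildcard path patterns, typically given relative to $SPLUNK_HOME/etc):

[replicationBlacklist]
# exclude one specific exported CSV
old_export_csv = apps/search/lookups/old_export.csv
# exclude every CSV in a particular app's lookups directory
big_app_lookups = apps/myapp/lookups/*.csv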

Ciao.

Giuseppe

isoutamo
SplunkTrust

You probably have big lookups on your SH that it tries to send to the IDX layer. If you have a SHC, the captain is the node that sends those search bundles to the IDXs. You can check this from the MC (Search - Distributed Search).

You can exclude those lookups from the bundle by size or name. Then, when you use those lookups, you must tell the lookup command to execute on the SH layer.

You can find many questions and answers about this issue in the community, like this one:
https://community.splunk.com/t5/Splunk-Search/Large-lookup-caused-the-bundle-replication-to-fail-Wha...
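For example (the index, field, and lookup names here are made up), once a large lookup is excluded from the bundle you can force it to run on the search head with the lookup command's local option:

index=web sourcetype=access_combined
| lookup local=true big_asset_lookup ip AS clientip OUTPUT owner, site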

r. Ismo 
