Hi @trashyroadz
Have opened a new thread for the issue I am facing.
Current Splunk version - 8.2.3.3
While running a query on the search page, I get the following error: "Unable to distribute to peer named <idx name> at uri <splunk idx uri> because replication was unsuccessful. ReplicationStatus: Failed-Failure info: failed_because_BUNDLE_SIZE_RETRIEVAL_FAILURE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available."
I did not suspect a connectivity issue, because the message box also showed a message saying the bundle size exceeds the limit.
On checking, I could see all apps in $SPLUNK_HOME/var/run/splunk/deploy on the deployer, even though we had changed only a single file in a single app. As per my understanding, only the modified apps should be pushed to the SHs, and from the SH captain to the search peers.
Please help with this. Let me know if any other details are needed.
Hi @trashyroadz
@richgalloway
Thank you for your inputs. I think I got confused between the search bundle and the deployer bundle. So I think I need to check the bundle in /opt/splunk/var/run/searchpeers on the search head. I guess I can also whitelist/blacklist the items that need to be sent to the indexers in distsearch.conf on the search head (captain). Please correct me if that is wrong.
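For example, I was planning to check the bundle sizes with something like this (just a rough sketch of what I mean, based on the path above):
# rough sketch - list the replicated bundles and their sizes
du -sh /opt/splunk/var/run/searchpeers/*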
There is another issue we are facing, which is why I thought the two were related and started checking the deployer bundle.
When we execute a bundle push command from the deployer, it always takes more than 30 minutes to complete, and sometimes we never get a success or error message. There are many apps, and I don't think increasing maxBundleSize will help reduce the time taken for the bundle push. Can you please advise on this as well?
Agreed. Generally it is recommended to instead blacklist items in your search head apps that don't need to be sent to the indexers, as this will keep your bundle sizes down (there is a sketch of what that can look like at the end of this reply). A few additional suggestions:
maxBundleSize = <int>
* The maximum size (in MB) of the bundle for which replication can occur. If the bundle is larger than this, bundle replication will not occur and an error message will be logged.
* Defaults to: 2048 (2GB)
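If you do decide to raise it (usually a last resort compared to trimming the bundle), as far as I know the setting goes under the replicationSettings stanza of distsearch.conf on the search head, roughly like this (3072 is just an illustrative value, not a recommendation):
# $SPLUNK_HOME/etc/system/local/distsearch.conf on the search head(s)
[replicationSettings]
maxBundleSize = 3072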
# extract the knowledge bundle, then show the size of each .csv lookup file inside it
tar xvf <file>.bundle
find . -type f -exec du -h {} \; | grep '\.csv'
[lookup]
max_memtable_bytes = 2*<size of the largest lookup>
Example: on all indexers, set the following in limits.conf (for unclustered indexers: $SPLUNK_HOME/etc/system/local/limits.conf; for clustered indexers, use the _cluster app on the Cluster Manager, or, if you distribute indexes.conf and other indexer settings in a custom app, place it there):
[lookup]
max_memtable_bytes = 135000000
# roughly 135 MB, where the largest lookup file was 120 MB
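As for the blacklisting mentioned at the start of this reply, it can look roughly like this in distsearch.conf on the search head(s). The app and lookup names below are made up, and as far as I recall the patterns are matched against paths relative to $SPLUNK_HOME/etc:
[replicationBlacklist]
# exclude one oversized lookup from the knowledge bundle
excludeBigLookup = apps/my_app/lookups/huge_lookup.csv
# exclude every lookup in a particular app
excludeAppLookups = apps/another_app/lookups/...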
Hi
Here is a link to an old post about your replication issue: https://community.splunk.com/t5/Splunk-Search/Large-lookup-caused-the-bundle-replication-to-fail-Wha... There are more if you need them.
Can you describe your SHC environment, so we can better understand your issue (OS, versions, size, apps, lookups etc.)?
r. Ismo
"search peer" is another term for "indexer" and deployers never send bundles to indexers. The bundle in this case is a search bundle, sent from SH to indexers. The search bundle contains every KO the indexers might need to complete the search.
The most common (IME) cause of bundle size problems is very large or too many lookup files. Make sure lookup files don't grow indefinitely by periodically removing unneeded data. Files over 1GB should be removed from the bundle and either sent to indexers out-of-band or used only on the SH (using local=true).
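For the local=true case, the lookup is invoked directly in the search, roughly like this (the lookup and field names here are just placeholders):
... | lookup local=true huge_lookup src_ip OUTPUT threat_category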