Deployment Architecture

ERROR DistributedBundleReplicationManager - got non-200 response from peer.



I recently upgraded Splunk from 6.0.4 to 6.1.1 on our test box.
Since then, I have been seeing the following error in splunkd.log:

ERROR DistributedBundleReplicationManager - got non-200 response from peer.uri=****,
reply="HTTP/1.1 400 Bad Request" response_code=400

Could someone help clarify the cause and how to resolve it?



Splunk Employee

This happens when the search-head is pushing a search bundle that is too large to the indexers.

The default maximum bundle size (maxBundleSize) is 1GB,
and the default maximum HTTP content length (max_content_length) accepted by splunkd is 800MB 😞

Therefore:

  • when 800MB < bundle < 1024MB, the indexers reject the upload and you see "failed_because_BUNDLE_DATA_TRANSMIT_FAILURE" or "ERROR DistributedBundleReplicationManager - got non-200 response from peer"
  • when the bundle is larger than 1024MB, you see a different error, this time from the search-head.
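A quick way to tell which case you are in is to check the size of the knowledge bundles on disk. The paths below are a sketch and assume a default $SPLUNK_HOME:

# on the search-head: recently built knowledge bundles
ls -lh $SPLUNK_HOME/var/run/*.bundle

# on an indexer: bundles received from search-heads
ls -lh $SPLUNK_HOME/var/run/searchpeers/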

Workarounds:

  • RECOMMENDED: reduce the bundle size (trim your lookups, use blacklists in distsearch.conf; see the sketch just below)
  • LESS RECOMMENDED: allow larger bundles
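For the blacklist approach, here is a minimal sketch of a [replicationBlacklist] stanza in distsearch.conf on the search-head. The entry name and lookup path are made-up placeholders; point them at whatever files are actually bloating your bundle:

[replicationBlacklist]
excludebiglookup = apps/myapp/lookups/very_large_lookup.csv
# paths are matched relative to $SPLUNK_HOME/etc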

Example: to bump the bundle limits to 2GB max.

On the indexers, edit server.conf (push from the cluster master's etc/master-apps in an indexer cluster):

[httpServer]
max_content_length = 2147483648
# in bytes => 2GB

On the search-head, edit distsearch.conf:

[replicationSettings]
maxBundleSize = 2048
# in MB => 2GB
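After editing, restart splunkd on the changed instances (or apply the configuration bundle from the cluster master). To confirm the effective values, one option is btool; the commands below are a sketch assuming a default $SPLUNK_HOME:

$SPLUNK_HOME/bin/splunk btool server list httpServer --debug | grep max_content_length
$SPLUNK_HOME/bin/splunk btool distsearch list replicationSettings --debug | grep maxBundleSize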


I got these on old hardware when I upgraded to 6.1.3. It appears to be a timing issue and storage speed appears to play a role. Take a look at this thread.
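If the symptom looks like a timeout rather than a size limit (slow storage while the bundle is written or unpacked), the replication timeouts in distsearch.conf on the search-head are another knob people raise. The values below are only an illustrative sketch, not tested recommendations:

[replicationSettings]
connectionTimeout = 60
sendRcvTimeout = 120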
