
max_content_length error

eugenekogan
Explorer

Does anyone know the cause of this error message, and how to solve/prevent it?

Problem replicating config (bundle) to search peer 'servername:8089', error code '413' message from peer 'content exceeded max_content_length'
1 Solution

Ellen
Splunk Employee

Examine the search peer's splunkd.log for the error message about the bundle request. The log entry shows both how large the bundle (in bytes) was at that point in time and the configured limit, as in the example below.

11-21-2011 23:27:37.349 ERROR HTTPServer - client_ip=1.234.5.678 sent request for method="POST /services/receivers/bundle/prd_searchhead01 HTTP/1.0" with content_length=937628312 greater than max_content_length=838860800

The logged content_length can serve as an approximate guide for resetting max_content_length. The bigger question, though, is why the bundle is so large and whether it can be managed better to reduce its overall size. Are there excessively large lookup tables that can be trimmed? Can a replicationWhitelist or replicationBlacklist be defined to further reduce the bundle that needs to be replicated to all peers?
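One way to hunt for oversized lookups is to scan the app directories for large CSV files. A minimal sketch in Python, assuming lookups live under $SPLUNK_HOME/etc/apps (the /opt/splunk default path and the 50MB threshold are illustrative assumptions, not from this thread):

```python
import os

def find_large_lookups(root, min_bytes=50 * 1024 * 1024):
    """Return (path, size_in_bytes) pairs for .csv lookups over min_bytes,
    largest first."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(".csv"):
                path = os.path.join(dirpath, name)
                hits.append((path, os.path.getsize(path)))
    return sorted(
        [h for h in hits if h[1] >= min_bytes],
        key=lambda h: h[1],
        reverse=True,
    )

if __name__ == "__main__":
    # The SPLUNK_HOME default below is an assumption; adjust for your install.
    splunk_home = os.environ.get("SPLUNK_HOME", "/opt/splunk")
    for path, size in find_large_lookups(os.path.join(splunk_home, "etc", "apps")):
        print(f"{size / (1024 * 1024):8.1f} MB  {path}")
```

Anything this surfaces is a candidate for trimming or for exclusion via a replication blacklist.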

For further details refer to:

http://docs.splunk.com/Documentation/Splunk/4.2.4/Deploy/Configuredistributedsearch#Limit_knowledge_...
http://docs.splunk.com/Documentation/Splunk/4.2.4/Deploy/Whatisdistributedsearch#What_search_heads_s...

Deciding on a reasonable maximum for max_content_length depends on your environment: how much memory is available without impacting the overall system, and how you manage the search head knowledge objects that are replicated.



Ellen
Splunk Employee

In 4.2.4 there is a new attribute in server.conf; its default setting is 800MB:

[httpServer]
# reject web accesses over 800MB in length
max_content_length = 838860800

One reason could be very large lookups, which leave the search peers unable to handle the replicated bundles once they exceed the default maximum of 838860800 bytes (i.e., 800MB).

Try to increase this setting on your search peers.
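A minimal sketch of that change on each search peer, assuming the logged content_length was around 938MB as in the example above (take the actual value from your own splunkd.log, and restart splunkd after editing):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on each search peer
[httpServer]
# Raise the limit above the observed bundle size with some headroom.
# 1000000000 bytes (~953MB) is an illustrative value, not a recommendation.
max_content_length = 1000000000
```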

Ellen
Splunk Employee

Replication occurs from the search head to the peers, so the replicationWhitelist or replicationBlacklist is defined on the search head, in its distsearch.conf. Do refer to the suggested Splunk documentation.
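As a hedged sketch of what that might look like on the search head (the stanza entry name and lookup path below are hypothetical examples, not from this thread):

```ini
# $SPLUNK_HOME/etc/system/local/distsearch.conf on the search head
[replicationBlacklist]
# Exclude a large static lookup that searches running on the peers never need.
# "huge_lookup" and the path are illustrative placeholders.
huge_lookup = apps/search/lookups/very_large_lookup.csv
```

After excluding a lookup this way, verify that no searches depend on it being present on the peers.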


spolapragada
New Member

Thanks a lot. Increasing the limit on the peers solved the issue for now.
We will look into the other proposed solution to "replicationWhitelist or replicationBlacklist that can be defined to further reduce the bundle that needs to be replicated to all peers". Should this 'replicationWhitelist' change be done in the distsearch.conf file on the searchHead or the peers?
Thanks!


eugenekogan
Explorer

I saw that setting, but didn't realize it was new in 4.2.4. Do you know of a way to determine a reasonable size limit? Thanks!
