Why Does Search Head Cluster Replication Fail with large lookup table

faol
Explorer

Replication is failing with the following error.

07-12-2015 21:08:45.859 +0000 WARN  ConfReplicationThread - Error pushing configurations to captain=https://server_name:8089, consecutiveErrors=1: Error in acceptPush, uploading lookup_table_file="/opt/splunk/etc/apps/SA-EndpointProtection/lookups/localprocesses_tracker.csv": Non-200 status_code=413: Content-Length of 1567337721 too large (maximum is 838860800)

Is there a way to allow the replication to occur even though the file is too large?

1 Solution

bpaul_splunk
Splunk Employee

As the error states, the file exceeds the max_content_length limit in server.conf, which defaults to 800 MB (838860800 bytes) here. This can be increased by adding the following to $SPLUNK_HOME/etc/system/local/server.conf:

[httpServer]
max_content_length = 1600000000

However, this could negatively affect performance, so it is preferable to reduce the size of the file if at all possible.
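
After restarting Splunk, you can confirm the effective value with btool (a minimal check, assuming a standard $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk btool server list httpServer --debug | grep max_content_length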


lloydknight
Builder

Hello,

What are the possible impacts of doubling max_content_length?


bpaul_splunk
Splunk Employee

The limit exists to prevent excessive memory consumption. In newer versions of Splunk, the default has been increased to 2 GB. This is the relevant information from the server.conf spec file:

max_content_length = <int>
* Measured in bytes.
* HTTP requests over this size will be rejected.
* Exists to avoid allocating an unreasonable amount of memory from web requests.
* Defaults to 2147483648 (2 GB).
* In environments where indexers have enormous amounts of RAM, this number can be reasonably increased to handle large quantities of bundle data.
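
For sizing in this case: the rejected push was 1567337721 bytes (roughly 1.46 GB), so any limit above that will let it through, e.g. the 1600000000 suggested above, or a value matching the newer 2 GB default:

[httpServer]
# matches the default in newer Splunk versions
max_content_length = 2147483648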


masonmorales
Influencer

Large lookups like this should ideally be converted to a KV store. That way, MongoDB can do the replication independently of the search bundle.
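
As a rough illustration of that approach (a sketch only; the stanza name, collection name, and fields_list below are assumptions based on the file name in the error, not the actual SA-EndpointProtection configuration), you would define a collection and a KV store lookup in the app:

collections.conf:

[localprocesses_tracker]

transforms.conf:

[localprocesses_tracker]
external_type = kvstore
collection = localprocesses_tracker
# replace with the actual columns of the CSV
fields_list = _key, dest, process

Then migrate the existing rows once with a search like:

| inputlookup localprocesses_tracker.csv | outputlookup localprocesses_tracker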
