Hi guys, Happy New Year!
I'm doing some code testing with the Splunk HEC, and now I need to transfer some large-volume data with gzip compression.
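For context, here is roughly how I'm sending the data (a minimal sketch; the host, port, and token are placeholders for my test environment, and I'm assuming the standard Python requests library):
import gzip
import json

import requests

# Placeholders - replace with your own HEC endpoint and token
HEC_URL = "https://splunk.example.com:8088/services/collector"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def send_batch(events):
    # Batch payload: one JSON object per event, concatenated back to back
    payload = "".join(json.dumps({"event": e}) for e in events)
    compressed = gzip.compress(payload.encode("utf-8"))
    resp = requests.post(
        HEC_URL,
        data=compressed,
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Encoding": "gzip",
        },
        verify=False,  # self-signed cert in my test setup
    )
    resp.raise_for_status()
    return resp.json()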
1. First, I found a limit in $SPLUNK_HOME/etc/system/default/limits.conf:
[http_input]
max_content_length = <integer>
* The maximum length, in bytes, of HTTP request content that is
accepted by the HTTP Event Collector server.
* Default: 838860800 (~ 800 MB)
However, this value seems to be checked against the size after decompression: my test file is only about 50 MiB compressed, far below 800 MB, but when I send the request, Splunk raises:
<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><title>413 Content-Length of 838889996 too large (maximum is 838860800)</title></head><body><h1>Content-Length of 838889996 too large (maximum is 838860800)</h1><p>The request your client sent was too large.</p></body></html>
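To double-check the sizes on my side, I compared the compressed and decompressed lengths of the test file (a quick sketch; "test.json.gz" is a hypothetical file name, and it reads the whole file into memory, which is fine for a one-off test):
import gzip
import os

# Hypothetical test file name
path = "test.json.gz"

compressed_size = os.path.getsize(path)
with gzip.open(path, "rb") as f:
    decompressed_size = len(f.read())

print(f"compressed:   {compressed_size:>12} bytes")   # ~50 MiB in my test
print(f"decompressed: {decompressed_size:>12} bytes") # close to the 838889996 in the 413 error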
2. The second limit I found is in $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf:
[http]
maxEventSize = <positive integer>[KB|MB|GB]
* The maximum size of a single HEC (HTTP Event Collector) event.
* HEC disregards and triggers a parsing error for events whose size is
greater than 'maxEventSize'.
* Default: 5MB
I think this limit applies to the size of a single event? If I send a batch of events in one request to "/services/collector", will this limit apply to each individual event in the batch? (See the sketch below for what I mean by a batch.)
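For reference, this is the kind of batch I mean, with a per-event size check reflecting my understanding of maxEventSize (a sketch; the 5 MB value is the default from the inputs.conf excerpt above):
import json

MAX_EVENT_SIZE = 5 * 1024 * 1024  # 5MB, the default maxEventSize

events = [{"event": {"msg": "event 1"}}, {"event": {"msg": "event 2"}}]

# Batch format for /services/collector: JSON objects concatenated back to back
payload = "".join(json.dumps(e) for e in events)

# My understanding: maxEventSize is checked per event,
# not against the whole batch payload
for i, e in enumerate(events):
    size = len(json.dumps(e).encode("utf-8"))
    assert size <= MAX_EVENT_SIZE, f"event {i} exceeds maxEventSize ({size} bytes)"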
Could any experts help confirm this behavior? If more details are needed, feel free to let me know. Many thanks!