Corrupted events using HTTPS and TCP (with SSL)

gary_byron
Observer

Has anyone had issues with the latest version of the ta-protocol adapter corrupting the data that comes in?
We have two feeds: one an HTTPS setup receiving from Akamai, and the other a straight TCP feed (SSL enabled).
The data for both of them seems to get corrupted: either the events get split or they get truncated at various points.
It's not the Splunk limits as far as I can tell.
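
For what it's worth, the Splunk limits that usually produce truncated or split events are the per-sourcetype parsing settings in props.conf on the instance that parses the data. A minimal sketch of what to rule out, assuming the waf:akamai:json sourcetype from the stanza further down (the values are illustrative only):

[waf:akamai:json]
# Default TRUNCATE is 10000 bytes; JSON documents larger than this get cut off
TRUNCATE = 100000
# Treat each newline-delimited JSON document as one event; do not re-merge lines
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)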


Damien_Dallimor
Ultra Champion

Can you describe your setup configuration? i.e. the protocol:// stanza from inputs.conf would help.

Boosting your TCP receive buffer size may also help; there is a field for this in the configuration.

gary_byron
Observer

Sure, listed below. Thanks - I was also looking at the TCP buffer size, but couldn't see what the default value was.
I had assumed it was just a number (in bytes).

[protocol://Akamai-Receiver]
bind_address = 0.0.0.0
client_auth_required = 0
index = prod_akamai
ip_version = v4
is_multicast = 0
output_type = stdout
port = 6710
protocol = http
set_broadcast = 0
set_multicast_loopback_mode = 0
sourcetype = waf:akamai:json
tcp_keepalive = 0
tcp_nodelay = 0
use_ssl = 1
keystore_pass = xxxx
keystore_path = /opt/splunk/etc/apps/IG_Certs/local/xxxx.jks
disabled = 0
server_verticle_instances = 2
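
As a sketch only: the receive buffer setting would go in the same stanza. The field name below is a placeholder rather than the add-on's actual setting name (check the add-on's inputs.conf.spec or setup page for the real one), and the unit (bytes) is just the assumption mentioned above.

[protocol://Akamai-Receiver]
# ...existing settings as above...
# Placeholder field name - confirm against the add-on's inputs.conf.spec
receive_buffer_size = 1048576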