TRUNCATE is set to 0, so there should be no truncation, yet events are still cut off after 3,969 characters (about 4 KB on disk).
Splunk runs on one system and RabbitMQ on another; events are ingested via STDOUT.
We have already verified that the events are transferred correctly via RabbitMQ, so the problem must be somewhere in Splunk and/or the add-on. Could this be an issue in the Python wrapper or in Java? Or are there additional settings for STDOUT?
Thanks in advance for your replies!
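To narrow down where the truncation happens, one option is to tap the event stream before it reaches Splunk's parsing pipeline and record the longest event seen. This is only a hedged sketch, assuming one event per line on the add-on's stdout; `max_event_length` is a hypothetical helper name, not part of the add-on.

```python
import sys

def max_event_length(lines):
    """Return the length of the longest event (one event per line)."""
    longest = 0
    for line in lines:
        longest = max(longest, len(line.rstrip("\n")))
    return longest

if __name__ == "__main__":
    # Pipe the event stream through this script, e.g.:
    #   some_producer | python check_lengths.py
    print(max_event_length(sys.stdin))
```

If the longest event reported here exceeds 3,969 characters but the indexed copy does not, the truncation is happening on the Splunk side rather than in transit.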
Here are the steps:
Set TRUNCATE=0 in props.conf for your sourcetype (if you have overridden/rewritten the sourcetype, you MUST use the ORIGINAL value).
Deploy this to all Heavy Forwarders and Indexers.
Restart all Splunk instances there.
Send new data in.
Test your changes with a search using "All time" on the time picker and index_earliest=-5m in your SPL, to ensure that you are running a valid test search.
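Taken together, the steps above amount to a props.conf change like the following. The sourcetype name `alerts` is taken from the input configuration in this thread and may differ in your environment:

```
[alerts]
TRUNCATE = 0
```

And a validation search along these lines (the index name here is a placeholder):

```
index=index sourcetype=alerts index_earliest=-5m
| eval raw_len=len(_raw)
| stats max(raw_len)
```

If max(raw_len) stays pinned at 3969 for new data after the restart, the limit is being applied somewhere other than TRUNCATE.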
This might be unrelated, but I had a similar issue with a large JSON input doing this even with TRUNCATE set to 0.
It was resolved by setting the "Response Handler" in the input to "JSONArrayHandler". That was an API input, however, so I'm not sure whether it relates to yours, but I thought I'd share just in case.
There are no errors in the logs. Here is our input configuration (credentials redacted):

```
ack_messages = 1
hec_batch_mode = 0
hec_https = 0
hostname = amqp-server
index_message_envelope = 0
index_message_propertys = 0
output_type = stdout
password = xxx
port = 5672
queue_name = amqp-queue
sourcetype = alerts
use_ssl = 0
username = xxxx
index = index
disabled = 0
exchange_name = alerts
hec_token = yyyyyyyyyyyyyyyyyyyy
```