Have a nice day, everyone!
For continuous event truncation tracking, I have a simple alert that notifies me whenever truncation occurs:
index="_internal" sourcetype="splunkd" log_level="WARN" "Truncating"
| rex field=_raw "limit\s+of\s+(?<Limit>\d+)\s"
| rex field=_raw "length\s+>=\s+(?<MessageLength>\d+)\s+"
| rex field=_raw "data_host=\"(?<Host>.*?)\",\s"
| rex field=_raw "data_sourcetype=\"(?<Sourcetype>.*?)\""
| stats count by Sourcetype, Host, Limit, MessageLength
| top limit=5 Sourcetype, MessageLength
| table Sourcetype, MessageLength

After my last update, I have started to notice that the 'splunk_python' sourcetype hits the default TRUNCATE value a couple of times per hour.
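For reference, the rex extractions above are aimed at the splunkd truncation warning, which looks roughly like this (an illustrative line with placeholder values; the exact wording may differ between Splunk versions):

WARN  LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 13852 - data_source="...", data_host="some-host", data_sourcetype="splunk_python"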
Overall, the events look fine, so I decided to tune the TRUNCATE value at the indexer cluster layer (manager-apps/_cluster/local/props.conf):
[splunk_python]
SHOULD_LINEMERGE = false
TRUNCATE = 20000

Unfortunately, that didn't help, and I'm still getting alerts with the default message length (10000). I then set TRUNCATE to 0 and saw no difference. What am I doing wrong?
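(For reference, the effective TRUNCATE for a stanza on any given instance can be checked with btool, which also shows which config file each value comes from; run it on the instance you expect to be doing the parsing:)

$SPLUNK_HOME/bin/splunk btool props list splunk_python --debug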
Make sure you're setting the TRUNCATE value on the Splunk instance that is actually reporting the truncation (the host of the _internal event, which is not necessarily the same as data_host), and be sure that instance is restarted after making the change.
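To see which instance is doing the truncating, it can help to group on the host field of the _internal event itself (the instance that wrote the warning) rather than only the data_host extracted from the message. A minimal variation of the original search:

index="_internal" sourcetype="splunkd" log_level="WARN" "Truncating"
| rex field=_raw "data_sourcetype=\"(?<Sourcetype>.*?)\""
| stats count by host, Sourcetype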
You're right.
I had also assumed that truncation could only happen at the IDXC layer, but it turned out to be my SHCs.
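For anyone who runs into the same thing: in that case the stanza needs to reach the search head cluster members, for example through an app on the deployer (the app name below is just a placeholder):

# Deployer: etc/shcluster/apps/<app_name>/local/props.conf
[splunk_python]
TRUNCATE = 20000

Then push it to the members with "splunk apply shcluster-bundle" so they pick up the change (restarting where required).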