
spath truncation

Explorer

We have a log format which contains a JSON payload. When we attempt to parse it using spath, anything beyond a certain character limit is dropped. We have tried setting:

[spath]
# number of characters to read from an XML or JSON event when auto extracting
extraction_cutoff = 10000
max_mem_usage_mb = 500

(up from 5000 and 200 in the defaults/limits.conf file respectively) but it made no difference. Is there some other setting that may be responsible?
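To illustrate the behaviour we're seeing, here is a minimal Python sketch of what a fixed character cutoff does to a JSON payload (the payload and cutoff value below are illustrative, not our real data):

```python
import json

# Illustrative value only: the real limit comes from [spath] extraction_cutoff
EXTRACTION_CUTOFF = 40

# Hypothetical payload, stands in for our actual JSON event
payload = json.dumps({"user": "david", "groups": ["g%d" % i for i in range(10)]})

# Everything past the cutoff is invisible to the extraction
truncated = payload[:EXTRACTION_CUTOFF]

try:
    json.loads(truncated)
    parsed = True
except json.JSONDecodeError:
    parsed = False

print(len(payload), parsed)
```

Once the payload is longer than the cutoff, the truncated string is no longer well-formed JSON, so fields after the cut point simply go missing.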

Thanks, David.


Re: spath truncation

Explorer

We added the settings in $SPLUNK_HOME/etc/system/local/limits.conf and then restarted the 3 Search Head Cluster instances.
The command:

$SPLUNK_HOME/bin/splunk btool limits list

shows:

[spath]
extract_all = true
extraction_cutoff = 10000
max_mem_usage_mb = 500

suggesting the new settings were picked up by the restarts.


Re: spath truncation

Communicator

I think that amending limits.conf will sort your issue. Have you indexed new data since making the amendment to limits.conf? The new spath threshold will not be applied retroactively.

We had a very similar issue recently where some user AD profiles ran upwards of 15k characters due to global group memberships. Raising the limit to 20k solved the problem, but we couldn't validate the fix until new data had been indexed (daily pull).

EDIT: Have you used a tool such as http://www.charactercountonline.com/ to confirm that you are raising the limits to the correct level?
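If you'd rather not paste log data into a web tool, you can measure the length locally; a minimal Python sketch (the sample payload is hypothetical):

```python
# Count characters in a payload locally instead of pasting it into a web tool.
# The sample string below is hypothetical.
payload = '{"memberOf": ["CN=GlobalGroup1", "CN=GlobalGroup2"]}'

# Compare this number against your extraction_cutoff setting
print(len(payload))
```

If the largest events come in just over your new cutoff, you'll see exactly this kind of partial extraction.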


Re: spath truncation

SplunkTrust

1) Have you validated that the JSON is well formed, using any of the various web tools?

2) At what character count is it being cut off? Have you experimented to determine the precise cutoff?

3) Have you verified that the entire JSON has been ingested and is retained in your index?
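For point 1, you can check well-formedness locally with a few lines of Python rather than a web tool (the sample events below are hypothetical):

```python
import json

def is_well_formed(raw: str) -> bool:
    """Return True if the string parses as JSON."""
    try:
        json.loads(raw)
        return True
    except json.JSONDecodeError:
        return False

# Hypothetical examples: a complete event vs. one truncated mid-array
print(is_well_formed('{"status": "ok", "items": [1, 2, 3]}'))  # True
print(is_well_formed('{"status": "ok", "items": [1, 2'))       # False
```

A False on an event pulled straight from the index would point at ingestion-time truncation rather than the spath limit.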
