I am using Splunk Cloud and I have defined a sourcetype (from the UI) with category Structured and Indexed Extractions set to json.
For most JSON logs published to my Splunk Cloud instance with this sourcetype, all fields are correctly extracted. The exceptions are some larger JSON events, for which only a few of the fields are extracted.
After reading some other questions, it seems there are limits either in spath itself (extraction_cutoff) or in the automatic key-value extraction (maxchars).
All these solutions require modifying limits.conf, which brings me to my questions:
- How do you configure this kind of limits in Splunk Cloud?
- Is there any other way to properly extract all fields from a large JSON event in Splunk Cloud?
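For context, these are the limits.conf settings usually raised for this symptom in on-prem deployments. The values below are illustrative, not defaults or recommendations; in Splunk Cloud this file is not directly editable, so changes like these typically go through Splunk Support or the Admin Config Service:

```ini
# limits.conf -- illustrative values only
[kv]
# Max characters auto-KV scans per event (default 10240)
maxchars = 40960

[spath]
# Max characters spath processes per event (default 5000)
extraction_cutoff = 40960
```
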
I have applied MAX_EVENTS = 40000 to the _json sourcetype; large events become searchable, but the field names were still not extracted. I also tried adding maxchars, which did not help either. Is there any way to make Splunk Cloud extract big fields (above 20K characters)?
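Since MAX_EVENTS controls line merging rather than event size, another setting sometimes involved here is TRUNCATE in props.conf, which caps the event length at parse time. A hedged sketch, assuming a custom sourcetype named my_json (the stanza name and value are placeholders; in Splunk Cloud the same settings can be edited under Settings > Source types > Advanced):

```ini
# props.conf -- illustrative, for a hypothetical custom sourcetype
[my_json]
INDEXED_EXTRACTIONS = json
# Raise the per-event truncation limit (default 10000 bytes);
# events cut off here can never be fully extracted downstream
TRUNCATE = 100000
```

If the raw JSON is truncated at ingest, raising search-time limits alone will not recover the missing fields, so it may be worth checking whether _raw is complete for the affected events first.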