Good morning all! I have a datasource that is valid JSON (I verified with python and jq). The entire event gets ingested, however a field at the tail end of the raw event does not show up in interesting fields. Splunk is parsing it correctly because if I look at the event, the key and values have the necessary color code indicating that they are KV.
My event has roughly 26k chars in it and is less than 1MB. I looked in limits.conf and found nothing valuable.
Any help is much appreciated
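To confirm whether _raw itself is fully intact, a quick search can report the raw event length (the index and sourcetype here are placeholders, swap in your own):

```
index=your_index sourcetype=your_sourcetype
| eval raw_len=len(_raw)
| stats max(raw_len) AS max_raw_len
```

If max_raw_len comes back around 26k, the event survived ingestion and the problem is downstream of indexing.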
So, even if the _raw is not truncated, there is still a limit set in limits.conf for KV.
[kv]
maxchars = <integer>
* Truncate _raw to this size and then do auto KV.
* Default: 10240 characters
What I believe is happening in your case is that _raw is being truncated before auto KV runs, so the fields at the end are not being extracted.
Maybe you can increase that and try again?
Source: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf#.5Bkv.5D
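As a sketch, the change in limits.conf (e.g. under $SPLUNK_HOME/etc/system/local/ on the search head) might look like this; the value of 30720 is an example chosen to comfortably cover ~26k-character events, not a recommendation:

```
[kv]
# Raise the auto-KV truncation point above the ~26k-character events.
# Default is 10240; 30720 here is an illustrative value, tune for your data.
maxchars = 30720
```

A Splunk restart is typically required for limits.conf changes to take effect.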
THANK YOU for the great response, this totally makes sense. What, if any, are the adverse effects of this change? I assume probably more stress on RAM?
Again, thank you for taking time to help me out here.
I do think you would see a memory impact with an increase in maxchars, so you'd want to weigh that out and possibly do some testing if you have the capability to. Right now, I have a use case that has made it necessary for me to increase it to 40,000 characters, and we're testing to see what adverse effects this may cause.
Hey there, turns out that I need to also make my setting to around 40k. What have your findings been thus far? Seems like a huge jump going 4x the default value.
We ended up backing down to 20k after testing. We had a discussion with our users and determined that beyond 20k wasn't needed in their use case for field extraction. We haven't seen a performance impact in testing, however it's not real indicative of production load.
Awesome. I will follow suit in testing like you are. I have it set to 15360 (1.5x the default value) and will keep an eye out, but in my case we need it to be 40k+ to work.
Hi, I am facing the same issue here. Just for clarification: does this [kv] setting need to be changed on the heavy forwarders or on the search heads?
Are you using Verbose Mode?