Getting Data In

Large JSON events not showing field in "Interesting fields"

brent_weaver
Builder

Good morning all! I have a data source that is valid JSON (I verified with python and jq). The entire event gets ingested; however, a field at the tail end of the raw event does not show up in "Interesting fields". Splunk is parsing it correctly, because if I look at the event, the key and values have the color coding that indicates they were extracted as KV pairs.

I would say that my event has roughly 26k characters in it, and it is less than 1 MB. I looked in limits.conf and found nothing that looked relevant.
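(For reference, a quick way to double-check the raw event size from search; a minimal sketch where the index and sourcetype names are placeholders:

index=main sourcetype=my_json_sourcetype
| eval raw_len=len(_raw)
| stats max(raw_len) AS longest_event

len() and stats are standard SPL, so this just reports the longest _raw in the result set.)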

Any help is much appreciated


ragedsparrow
Contributor

So, even if the _raw is not truncated, there is still a limit set in limits.conf for KV.

[kv]
maxchars = <integer>
* Truncate _raw to this size and then do auto KV.
* Default: 10240 characters

What I believe is happening in your case is that _raw is being truncated before auto KV runs, so those fields at the end are not being extracted.

Maybe you can increase that and try again?

Source: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf#.5Bkv.5D
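For example, a minimal override could look like this (a sketch only; the 20480 value is just an illustrative choice, and it would go in a local limits.conf such as $SPLUNK_HOME/etc/system/local/limits.conf):

[kv]
# Raise the auto-KV truncation point from the default 10240 characters
# so fields near the end of large events still get extracted.
maxchars = 20480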


brent_weaver
Builder

THANK YOU for the great response; this totally makes sense. What, if any, are the adverse effects of this change? I assume probably more stress on RAM?

Again, thank you for taking time to help me out here.


ragedsparrow
Contributor

I do think you would see a memory impact with an increase in maxchars, so you'd want to weigh that and possibly do some testing if you have the capability to. Right now I have a use case that has made it necessary for me to increase it to 40,000 characters, and we're doing some testing to see what adverse effects this may cause.
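One way to confirm which value is actually in effect after a change (standard btool usage, assuming a default install path):

$SPLUNK_HOME/bin/splunk btool limits list kv --debug

The --debug flag also shows which configuration file each setting comes from, which helps when several apps ship their own limits.conf.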


brent_weaver
Builder

Hey there, it turns out that I also need to set mine to around 40k. What have your findings been thus far? It seems like a huge jump, going to 4x the default value.


ragedsparrow
Contributor

We ended up backing down to 20k after testing. We had a discussion with our users and determined that anything beyond 20k wasn't needed in their use case for field extraction. We haven't seen a performance impact in testing; however, our testing isn't really indicative of production load.


brent_weaver
Builder

Awesome, I will follow suit in testing like you are. I have it set to 15360 (1.5x the default value) and will keep an eye out, but in my case we need it to be 40k+ to work.


rahulg
Explorer

Hi, I am facing the same issue here. Just a clarification: does this kv setting need to be changed on the heavy forwarder or on the search heads?


richgalloway
SplunkTrust

Are you using Verbose Mode?
