<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Not able to extract Nested JSON in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551987#M91612</link>
    <description>&lt;P&gt;Possibly the "." but more likely the "-". Try putting the &amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt; names in single quotes.&lt;/P&gt;</description>
    <pubDate>Tue, 18 May 2021 08:25:59 GMT</pubDate>
    <dc:creator>ITWhisperer</dc:creator>
    <dc:date>2021-05-18T08:25:59Z</dc:date>
    <item>
      <title>Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551737#M91571</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have an event that is one entire JSON object. It looks something like this:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;{
    Key1 : {
        KEY2: VAL2,
        KEY3: VAL3,
        ....
    },
    KeyX : {
        KEY2: VAL2,
        KEY3: VAL3,
        ....
    },
    KeyY : {
        KEY2: VAL2,
        KEY3: VAL3,
        ....
    }
}&lt;/LI-CODE&gt;&lt;P&gt;Here Key1, KeyX and KeyY are unknown to me, meaning they can change all the time, and I would get around 100 such sub-dictionaries. I just want each inner sub-dictionary as a separate Splunk event:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;{
    KEY2: VAL2,
    KEY3: VAL3,
    ....
}&lt;/LI-CODE&gt;&lt;P&gt;I have tried a lot of different search queries using spath, but nothing seems to help.&lt;/P&gt;&lt;P&gt;Could someone please help me with this? I very much appreciate it.&lt;/P&gt;</description>
      <pubDate>Sun, 16 May 2021 08:21:00 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551737#M91571</guid>
      <dc:creator>surejsajeev</dc:creator>
      <dc:date>2021-05-16T08:21:00Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551740#M91572</link>
      <description>&lt;P&gt;There might be an easier way to do this, but you could extract all the fields with spath, then reconstruct the fields and values, removing the "unknown" part of the field name:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| makeresults 
| eval _raw="{
	\"KeyA\": {
		\"Key2\": 3,
		\"Key3\": 3
	},
	\"KeyE\": {
		\"Key2\": 5,
		\"Key3\": 4
	},
	\"KeyC\": {
		\"Key2\": 7,
		\"Key3\": 5
	},
	\"KeyH\": {
		\"Key2\": 9,
		\"Key3\": 6
	},
	\"KeyR\": {
		\"Key2\": 1,
		\"Key3\": 7
	}
}|{
	\"KeyA\": {
		\"Key2\": 1,
		\"Key3\": 3
	},
	\"KeyE\": {
		\"Key2\": 2,
		\"Key3\": 4
	},
	\"KeyC\": {
		\"Key2\": 3,
		\"Key3\": 5
	},
	\"KeyH\": {
		\"Key2\": 4,
		\"Key3\": 6
	},
	\"KeyR\": {
		\"Key2\": 5,
		\"Key3\": 7
	}
}"
| eval events=split(_raw,"|") 
| mvexpand events
| fields - _*
| rename events as _raw


| spath
| fields - _raw
| foreach *.*
    [| eval Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;=if(isnull(Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;),"&amp;lt;&amp;lt;MATCHSEG2&amp;gt;&amp;gt;:".'&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;',mvappend(Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;,"&amp;lt;&amp;lt;MATCHSEG2&amp;gt;&amp;gt;:".'&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;'))]
| fields *_*
| foreach *_*
    [| eval events=if(isnull(events),mvjoin(&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;,"|"),mvappend(events,mvjoin(&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;,"|")))]
| fields events
| mvexpand events
| streamstats count as row 
| eval events=split(events,"|")
| mvexpand events
| eval key=mvindex(split(events,":"),0)
| eval value=mvindex(split(events,":"),1)
| eval {key}=value
| fields - key value events
| stats values(*) as * by row
| fields - row&lt;/LI-CODE&gt;&lt;P&gt;This assumes two-level nesting in the JSON, as shown in your example; more complex JSON formats might not work so well.&lt;/P&gt;&lt;P&gt;Another way (perhaps simpler) would be to replace all the "unknown" values with known values:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| spath
| foreach *.*
    [| eval unknown=if(isnull(unknown),"&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;",mvdedup(mvappend(unknown,"&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;")))]
| fields unknown
| mvexpand unknown
| eval _raw=replace(_raw,"\"".unknown."\"","\"known\"")
| spath path=known&lt;/LI-CODE&gt;</description>
      <pubDate>Sun, 16 May 2021 11:27:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551740#M91572</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-16T11:27:44Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551766#M91576</link>
      <description>&lt;P&gt;Thank you very much &lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt; for the solution. The query works perfectly. However, when I run it, I get this error message from Splunk:&lt;/P&gt;&lt;P&gt;"The search you ran returned a number of fields that exceeded the current indexed field extraction limit. To ensure that all fields are extracted for search, set limits.conf: [kv] / indexed_kv_limit to a number that is higher than the number of fields contained in the files that you index."&lt;/P&gt;&lt;P&gt;Could you advise on how I can resolve this issue, please? I am not sure of the number of fields my query will generate. Is there a dynamic limit I could use?&lt;/P&gt;&lt;P&gt;Your help is much appreciated.&lt;/P&gt;</description>
      <pubDate>Mon, 17 May 2021 01:28:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551766#M91576</guid>
      <dc:creator>surejsajeev</dc:creator>
      <dc:date>2021-05-17T01:28:20Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551950#M91601</link>
      <description>&lt;P&gt;Which of the two approaches gives you this error? Either way, you (or your Splunk admin) should increase the field limit in the configuration.&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 06:00:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551950#M91601</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-18T06:00:08Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551951#M91602</link>
      <description>&lt;P&gt;It was the second one,&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;.&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 06:01:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551951#M91602</guid>
      <dc:creator>surejsajeev</dc:creator>
      <dc:date>2021-05-18T06:01:50Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551953#M91603</link>
      <description>&lt;P&gt;You could try removing the unknown field once it has been used:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| spath
| foreach *.*
    [| eval unknown=if(isnull(unknown),"&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;",mvdedup(mvappend(unknown,"&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;")))]
| fields unknown
| mvexpand unknown
| eval _raw=replace(_raw,"\"".unknown."\"","\"known\"")
| fields - unknown
| spath path=known&lt;/LI-CODE&gt;&lt;P&gt;Does the job inspector inform you how far through the search it got before it ran out of fields?&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 06:08:23 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551953#M91603</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-18T06:08:23Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551957#M91604</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;These are the message i see from the job inspector.&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Forcefully terminated search process with sid=&amp;lt;SID&amp;gt; since its physical memory usage (34997.320000 MB) has exceeded the physical memory threshold specified in limits.conf/search_process_memory_usage_threshold (30000.000000 MB)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;-----------------------------------------------------------&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;command.mvexpand: output will be truncated at 11700 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;-----------------------------------------------------------&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Forcefully terminated search process with sid=&amp;lt;SID&amp;gt; since its physical memory usage (34997.320000 MB) has exceeded the physical memory threshold specified in limits.conf/search_process_memory_usage_threshold (30000.000000 MB)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;-----------------------------------------------------------&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;This is the query that I am using&amp;nbsp;&lt;/P&gt;&lt;P&gt;| spath&lt;BR /&gt;| foreach *.*&lt;BR /&gt;[| eval unknown=if(isnull(unknown),"&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;",mvdedup(mvappend(unknown,"&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;")))]&lt;BR /&gt;| fields unknown&lt;BR /&gt;| mvexpand unknown&lt;BR /&gt;| eval _raw=replace(_raw,"\"".unknown."\"","\"known\"")&lt;BR /&gt;| fields - unknown&lt;BR /&gt;| spath path=known| spath input=known | table COLUMN1, COLUMN2, COLUMN3, ....COLUNM24 | where match(TIMESTAMP,".") AND like(COLUMN1, "%")&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 06:24:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551957#M91604</guid>
      <dc:creator>surejsajeev</dc:creator>
      <dc:date>2021-05-18T06:24:25Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551959#M91605</link>
      <description>&lt;P&gt;mvexpand has a memory limit - the first option might be more efficient in terms of memory usage, since it removes _raw as soon as it is no longer required, and similarly for other fields created along the way. If changing the configuration still isn't enough, have a look at my post on ways around mvexpand limits:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.splunk.com/t5/Splunk-Search/mvexpand-limits/m-p/549178" target="_blank"&gt;https://community.splunk.com/t5/Splunk-Search/mvexpand-limits/m-p/549178&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 06:34:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551959#M91605</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-18T06:34:02Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551980#M91610</link>
      <description>&lt;P&gt;Thank you&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;&lt;/P&gt;&lt;P&gt;I tried the first option in my query and it did not work (which is why I used the second option). The error message from the first was this:&lt;/P&gt;&lt;P&gt;Failed to parse templatized search for field 'ABC_DEB_SJW_AK_SIE_122_EL-3602110-134:checksumawq:wwrikahs.AEUUR&lt;/P&gt;&lt;P&gt;This is my query:&lt;/P&gt;&lt;P&gt;base search&amp;nbsp;| spath&lt;BR /&gt;| fields - _raw&lt;BR /&gt;| foreach *.*&lt;BR /&gt;[| eval Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;=if(isnull(Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;),"&amp;lt;&amp;lt;MATCHSEG2&amp;gt;&amp;gt;:".'&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;',mvappend(Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;,"&amp;lt;&amp;lt;MATCHSEG2&amp;gt;&amp;gt;:".'&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;'))]&lt;BR /&gt;| fields *_*&lt;BR /&gt;| foreach *_*&lt;BR /&gt;[| eval events=if(isnull(events),mvjoin(&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;,"|"),mvappend(events,mvjoin(&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;,"|")))]&lt;BR /&gt;| fields events&lt;BR /&gt;| mvexpand events&lt;BR /&gt;| streamstats count as row&lt;BR /&gt;| eval events=split(events,"|")&lt;BR /&gt;| mvexpand events&lt;BR /&gt;| eval key=mvindex(split(events,":"),0)&lt;BR /&gt;| eval value=mvindex(split(events,":"),1)&lt;BR /&gt;| eval {key}=value&lt;BR /&gt;| fields - key value events&lt;BR /&gt;| stats values(*) as * by row&lt;BR /&gt;| fields - row&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any idea why this is? Is it because of the "."?&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 07:44:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551980#M91610</guid>
      <dc:creator>surejsajeev</dc:creator>
      <dc:date>2021-05-18T07:44:21Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551987#M91612</link>
      <description>&lt;P&gt;Possibly the "." but more likely the "-". Try putting the &amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt; names in single quotes.&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 08:25:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551987#M91612</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-18T08:25:59Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551989#M91613</link>
      <description>&lt;P&gt;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;&lt;/P&gt;&lt;P&gt;Changed the query to&amp;nbsp;&lt;/P&gt;&lt;P&gt;index=edennet-self-organising-app-s sourcetype="_json" | spath&lt;BR /&gt;| fields - _raw&lt;BR /&gt;| foreach *.*&lt;BR /&gt;[| eval Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;=if(isnull(Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;),"&amp;lt;&amp;lt;MATCHSEG2&amp;gt;&amp;gt;:".'&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;',mvappend(Group_&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;,"&amp;lt;&amp;lt;MATCHSEG2&amp;gt;&amp;gt;:".'&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;'))]&lt;BR /&gt;| fields *_*&lt;BR /&gt;| foreach *_*&lt;BR /&gt;[| eval events=if(isnull(events),mvjoin('&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;',"|"),mvappend(events,mvjoin('&amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt;',"|")))]&lt;BR /&gt;| fields events&lt;BR /&gt;| mvexpand events&lt;BR /&gt;| streamstats count as row&lt;BR /&gt;| eval events=split(events,"|")&lt;BR /&gt;| mvexpand events&lt;BR /&gt;| eval key=mvindex(split(events,":"),0)&lt;BR /&gt;| eval value=mvindex(split(events,":"),1)&lt;BR /&gt;| eval {key}=value&lt;BR /&gt;| fields - key value events&lt;BR /&gt;| stats values(*) as * by row&lt;BR /&gt;| fields - row&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;but still got the same error. The error occurs for strings without the "-" too.&amp;nbsp; the strings consist of characters such as "," , "=", "_" and ":".&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 08:31:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551989#M91613</guid>
      <dc:creator>surejsajeev</dc:creator>
      <dc:date>2021-05-18T08:31:39Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551991#M91614</link>
      <description>&lt;P&gt;Perhaps you could share a more realistic, anonymised example of the JSON you are dealing with, as the initial made-up example does not appear to be representative enough. It often helps to have a more complete picture to work from.&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 08:49:24 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551991#M91614</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-18T08:49:24Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551997#M91615</link>
      <description>&lt;P&gt;Sure,&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;.&lt;/P&gt;&lt;P&gt;This is what the JSON looks like:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;{
"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12:source create": {
"COMMENT": "None",
"MODULE_DATA": "source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12",
"MOD_1": "source_inst",
"MOD_2": "name_1",
"MOD_3": "sub_inst",
"MOD_4": "execute",
"MOD_5": "None",
"MOD_6": "XER1",
"MOD_7": "None",
"MOD_8": "source create",
"MOD_9": "A1",
"MOD_10": "None",
"MOD_11": "None",
"MOD_12": "None",
"TIMESTAMP": "test_time"
},
"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12:source change:param1": {
"COMMENT": "None",
"MODULE_DATA": "source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12",
"MOD_1": "source_inst",
"MOD_2": "name_1",
"MOD_3": "sub_inst",
"MOD_4": "execute",
"MOD_5": "12",
"MOD_6": "XER1",
"MOD_7": "None",
"MOD_8": "source change",
"MOD_9": "A1",
"MOD_10": "param1",
"MOD_11": "None",
"MOD_12": "None",
"TIMESTAMP": "test_time"
},
"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12:source change:temo1aaa": {
"COMMENT": "None",
"MODULE_DATA": "source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12",
"MOD_1": "source_inst",
"MOD_2": "name_1",
"MOD_3": "sub_inst",
"MOD_4": "execute",
"MOD_5": "-1231321",
"MOD_6": "XER1",
"MOD_7": "None",
"MOD_8": "source change",
"MOD_9": "A1",
"MOD_10": "temo1aaa",
"MOD_11": "None",
"MOD_12": "None",
"TIMESTAMP": "test_time"
}
}&lt;/LI-CODE&gt;&lt;P&gt;This entire dictionary is considered as one single event in Splunk.&lt;/P&gt;&lt;P&gt;I want to parse this dictionary and extract each second-level dictionary, i.e. the part below alone, as an individual Splunk event, so that I can use spath on it and present it in the form of a table.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;{
"COMMENT": "None",
"MODULE_DATA": "source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12",
"MOD_1": "source_inst",
"MOD_2": "name_1",
"MOD_3": "sub_inst",
"MOD_4": "execute",
"MOD_5": "None",
"MOD_6": "XER1",
"MOD_7": "None",
"MOD_8": "source create",
"MOD_9": "A1",
"MOD_10": "None",
"MOD_11": "None",
"MOD_12": "None",
"TIMESTAMP": "test_time"
}&lt;/LI-CODE&gt;</description>
      <pubDate>Tue, 18 May 2021 09:05:25 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/551997#M91615</guid>
      <dc:creator>surejsajeev</dc:creator>
      <dc:date>2021-05-18T09:05:25Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/552021#M91617</link>
      <description>&lt;P&gt;You may still have problems with mvexpand, but how about this to deal with the unusual names:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| makeresults 
| eval _raw="{
\"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12:source create\": {
\"COMMENT\": \"None\",
\"MODULE_DATA\": \"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12\",
\"MOD_1\": \"source_inst\",
\"MOD_2\": \"name_1\",
\"MOD_3\": \"sub_inst\",
\"MOD_4\": \"execute\",
\"MOD_5\": \"None\",
\"MOD_6\": \"XER1\",
\"MOD_7\": \"None\",
\"MOD_8\": \"source create\",
\"MOD_9\": \"A1\",
\"MOD_10\": \"None\",
\"MOD_11\": \"None\",
\"MOD_12\": \"None\",
\"TIMESTAMP\": \"test_time\"
},
\"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12:source change:param1\": {
\"COMMENT\": \"None\",
\"MODULE_DATA\": \"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12\",
\"MOD_1\": \"source_inst\",
\"MOD_2\": \"name_1\",
\"MOD_3\": \"sub_inst\",
\"MOD_4\": \"execute\",
\"MOD_5\": \"12\",
\"MOD_6\": \"XER1\",
\"MOD_7\": \"None\",
\"MOD_8\": \"source change\",
\"MOD_9\": \"A1\",
\"MOD_10\": \"param1\",
\"MOD_11\": \"None\",
\"MOD_12\": \"None\",
\"TIMESTAMP\": \"test_time\"
},
\"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12:source change:temo1aaa\": {
\"COMMENT\": \"None\",
\"MODULE_DATA\": \"source_dict=source_name,source_dict=location,context=location_code,manage=1,function=1,data1=1,data2=12\",
\"MOD_1\": \"source_inst\",
\"MOD_2\": \"name_1\",
\"MOD_3\": \"sub_inst\",
\"MOD_4\": \"execute\",
\"MOD_5\": \"-1231321\",
\"MOD_6\": \"XER1\",
\"MOD_7\": \"None\",
\"MOD_8\": \"source change\",
\"MOD_9\": \"A1\",
\"MOD_10\": \"temo1aaa\",
\"MOD_11\": \"None\",
\"MOD_12\": \"None\",
\"TIMESTAMP\": \"test_time\"
}
}"
| spath
| transpose 0
| eval dict=if(column="_time" OR column="_raw",column,mvindex(split(column,"."),0,-2))
| dedup dict
| eventstats values(dict) as dict
| eval dict=mvfilter(dict!="_raw" AND dict!="_time")
| eval "row 1"=if(column="_time" OR column="_raw",'row 1',dict)
| eval column=if(column="_time" OR column="_raw",column,"dict")
| dedup column
| fields - dict
| transpose 0 header_field=column
| fields - column
| mvexpand dict
| eval _raw=replace(_raw,"\"".dict."\"","\"known\"")
| spath path=known&lt;/LI-CODE&gt;</description>
      <pubDate>Tue, 18 May 2021 11:03:44 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/552021#M91617</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-18T11:03:44Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/552027#M91618</link>
      <description>&lt;P&gt;Here's another approach, which only uses mvexpand to create one extra row; the other rows are created with makecontinuous and populated with filldown.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| spath
| foreach *.*
    [| eval dict=if(isnull(dict),"&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;",mvappend(dict,"&amp;lt;&amp;lt;MATCHSEG1&amp;gt;&amp;gt;"))]
| fields dict
| eval dict=mvdedup(dict)
| eval rows=mvcount(dict) 
| eval rows=mvappend(rows,"1")
| mvexpand rows
| rename _time as time, _raw as raw
| makecontinuous rows
| filldown *
| eval raw=replace(raw,"\"".mvindex(dict,rows-1)."\"","\"known\"")
| spath input=raw path=known&lt;/LI-CODE&gt;&lt;P&gt;If that still runs out of memory in mvexpand, there is another way to create the extra row.&lt;/P&gt;</description>
      <pubDate>Tue, 18 May 2021 11:52:15 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/552027#M91618</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-18T11:52:15Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/552169#M91633</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/225168"&gt;@ITWhisperer&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;I tried both of the queries that you mentioned, and both of them showed the same errors as before. Is there any other available option?&lt;/P&gt;&lt;P&gt;I can increase the limit in the limits.conf file.&lt;/P&gt;&lt;P&gt;Currently, this is my limits.conf:&lt;/P&gt;&lt;P&gt;[thruput]&lt;BR /&gt;maxKBps = 0&lt;/P&gt;&lt;P&gt;[search]&lt;BR /&gt;enable_memory_tracker=true&lt;BR /&gt;search_process_memory_usage_threshold=4000&lt;/P&gt;&lt;P&gt;Would this solve my issue? The error message was&amp;nbsp;&lt;SPAN&gt;"since its physical memory usage (30891.121000 MB) has exceeded the physical memory threshold specified in limits.conf/search_process_memory_usage_threshold (30000.000000 MB)."&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I am very thankful for all the help you have been giving :).&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 19 May 2021 09:57:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/552169#M91633</guid>
      <dc:creator>surejsajeev</dc:creator>
      <dc:date>2021-05-19T09:57:10Z</dc:date>
    </item>
    <item>
      <title>Re: Not able to extract Nested JSON</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/552172#M91634</link>
      <description>&lt;P&gt;To be honest, I don't know - someone else does that stuff for me &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; - having said that, a quick glance through the docs turned up this stanza, which you might want to try too:&lt;/P&gt;&lt;P&gt;&lt;SPAN class="mw-headline"&gt;[mvexpand]&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE&gt;* This stanza allows for fine tuning of mvexpand search command.

max_mem_usage_mb = &amp;lt;non-negative integer&amp;gt;
* Overrides the default value for "max_mem_usage_mb".
* Limits the amount of RAM, in megabytes (MB), a batch of events or results will
  use in the memory of a search process.
* See definition in the [default] stanza for "max_mem_usage_mb"
  for more details.
* Default: 500&lt;/PRE&gt;</description>
      <pubDate>Wed, 19 May 2021 10:09:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Not-able-to-extract-Nested-JSON/m-p/552172#M91634</guid>
      <dc:creator>ITWhisperer</dc:creator>
      <dc:date>2021-05-19T10:09:07Z</dc:date>
    </item>
  </channel>
</rss>

