<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to extract multiple JSON array? in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680891#M232700</link>
    <description>&lt;P&gt;I don't see a correlationID in your sample data. Is it a root-level node in the JSON, or is it inside content as well? I will assume a root-level node. (See my new emulation below.)&lt;/P&gt;&lt;P&gt;Importantly, I am speculating that you want to group all values of these three fields by correlationID. Is that the requirement? I will assume yes. Normally, data people would take the path in my previous comment because you don't want to mix and match BatchId, RequestID, and Status. Do you care whether the order gets mixed up? The result display you illustrated doesn't answer this question.&lt;/P&gt;&lt;P&gt;I will assume that you do care about order of appearance. (But I warn you that three ordered lists of &amp;gt; 100 values each are not good for users. I, for one, would hate to look at such a table.) If so, you must answer still more important questions: Does your data contain identical triplets (BatchId, RequestID, Status) within any given correlationID? If so, do you want to preserve all triplets? If you preserve all triplets, do you care about the order of the events that carry them? Or do you want to filter out duplicates? If you remove duplicate triplets, do you care about the order of the events that carried them? If you care about order, what are the criteria for ordering?&lt;/P&gt;&lt;P&gt;See, volunteers on this board have no intimate knowledge of your dataset or your use case. Illustrating a subset of the data in text is an excellent start. But you still need to define the problem well enough that another person without your knowledge could sift through your data and arrive at the same conclusion you would, all without SPL. If the other person has to read your mind, nine times out of ten the mind reader will be wrong.&lt;/P&gt;&lt;P&gt;Below is an emulation that includes correlationID as a root node:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| makeresults
| eval _raw = "{\"content\" : {
    \"List of Batches Processed\" : [ {
      \"P_REQUEST_ID\" : \"177\",
      \"P_BATCH_ID\" : \"1\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1r7\",
      \"P_BATCH_ID\" : \"2\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1577\",
      \"P_BATCH_ID\" : \"3\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"16577\",
      \"P_BATCH_ID\" : \"4\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }]
  },
  \"correlationID\": \"125dfe5\"
}"
| spath
``` data emulation above ```&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 15 Mar 2024 23:42:35 GMT</pubDate>
    <dc:creator>yuanliu</dc:creator>
    <dc:date>2024-03-15T23:42:35Z</dc:date>
    <item>
      <title>How to extract multiple JSON array?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680786#M232679</link>
      <description>&lt;P&gt;Thanks in advance.&lt;/P&gt;
&lt;P&gt;1. I have a JSON object "content.List of Batches Processed{}", and Splunk already extracts the field "content.List of Batches Processed{}.BatchID", whose count shows as 26, but "content.List of Batches Processed{}.BatchID" actually contains 134 records. So I want to extract the multiple JSON values as fields. From the logs below, I want to extract all the values of P_REQUEST_ID, P_BATCH_ID, and P_TEMPLATE.&lt;/P&gt;
&lt;P&gt;The query I tried to fetch the data:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;| eval BatchID=spath("content.List of Batches Processed{}*", "content.List of Batches Processed{}.P_BATCH_ID"),Request=spath(_raw, "content.List of Batches Processed{}.P_REQUEST_ID")|table BatchID Request&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;"content" : {
    "List of Batches Processed" : [ {
      "P_REQUEST_ID" : "177",
      "P_BATCH_ID" : "1",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_ZUORA_FILE_NAME" : "Template20240306102852.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }, {
      "P_REQUEST_ID" : "1r7",
      "P_BATCH_ID" : "2",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_ZUORA_FILE_NAME" : "Template20240306102852.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }, {
      "P_REQUEST_ID" : "1577",
      "P_BATCH_ID" : "3",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_ZUORA_FILE_NAME" : "Template20240306102852.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }, {
      "P_REQUEST_ID" : "16577",
      "P_BATCH_ID" : "4",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_ZUORA_FILE_NAME" : "Template20240306102852.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
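The grouped display requested in this thread (one row per correlationID with multivalue columns) boils down to a group-by that appends each (BatchId, RequestID, Status) triplet to its correlationID's lists. A minimal Python sketch of that logic, using hypothetical flattened records with values from the mock table later in the thread:

```python
from collections import defaultdict

# Hypothetical flattened records: (correlationId, batch, request, status),
# one per array element after expansion.
records = [
    ("125dfe5", "1", "117", "Success"),
    ("125dfe5", "2", "112", "Success"),
    ("32435sf53", "1", "324", "Success"),
]

# Rough equivalent of SPL's  | stats list(...) as ... by correlationId
grouped = defaultdict(lambda: {"BatchId": [], "RequestID": [], "Status": []})
for corr, batch, request, status in records:
    grouped[corr]["BatchId"].append(batch)
    grouped[corr]["RequestID"].append(request)
    grouped[corr]["Status"].append(status)

print(grouped["125dfe5"]["BatchId"])  # ['1', '2']
```

Note that list-style aggregation preserves duplicates and arrival order; a set-style aggregation (SPL's values) would dedup and sort instead.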
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 16 Mar 2024 08:12:31 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680786#M232679</guid>
      <dc:creator>karthi2809</dc:creator>
      <dc:date>2024-03-16T08:12:31Z</dc:date>
    </item>
    <item>
      <title>Re: How to extract multiple JSON array?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680798#M232680</link>
      <description>&lt;P&gt;Use the path argument in &lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Spath" target="_blank" rel="noopener"&gt;spath&lt;/A&gt; to lock in a JSON array.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| spath path=content."List of Batches Processed"{}
| mvexpand content."List of Batches Processed"{}
| spath input=content."List of Batches Processed"{}
| fields - _* content.*&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Note your sample data is non-compliant. &amp;nbsp;Correcting for syntax, it should give&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="104.53125px" height="25px"&gt;P_BATCH_ID&lt;/TD&gt;&lt;TD width="102.703125px" height="25px"&gt;P_MESSAGE&lt;/TD&gt;&lt;TD width="40px" height="25px"&gt;P_MORE_BATCHES_EXISTS&lt;/TD&gt;&lt;TD width="86.078125px" height="25px"&gt;P_PERIOD&lt;/TD&gt;&lt;TD width="124.296875px" height="25px"&gt;P_REQUEST_ID&lt;/TD&gt;&lt;TD width="157.625px" height="25px"&gt;P_RETURN_STATUS&lt;/TD&gt;&lt;TD width="108.390625px" height="25px"&gt;P_TEMPLATE&lt;/TD&gt;&lt;TD width="240.234375px" height="25px"&gt;P_ZUORA_FILE_NAME&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="104.53125px" height="135px"&gt;1&lt;/TD&gt;&lt;TD width="102.703125px" height="135px"&gt;Data loaded in RevPro Successfully - Success: 10000 Failed: 0&lt;/TD&gt;&lt;TD width="40px" height="135px"&gt;Y&lt;/TD&gt;&lt;TD width="86.078125px" height="135px"&gt;24&lt;/TD&gt;&lt;TD width="124.296875px" height="135px"&gt;177&lt;/TD&gt;&lt;TD width="157.625px" height="135px"&gt;SUCCESS&lt;/TD&gt;&lt;TD width="108.390625px" height="135px"&gt;Template&lt;/TD&gt;&lt;TD width="240.234375px" height="135px"&gt;Template20240306102852.csv&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="104.53125px" height="135px"&gt;2&lt;/TD&gt;&lt;TD width="102.703125px" height="135px"&gt;Data loaded in RevPro Successfully - Success: 10000 Failed: 0&lt;/TD&gt;&lt;TD width="40px" height="135px"&gt;Y&lt;/TD&gt;&lt;TD width="86.078125px" height="135px"&gt;24&lt;/TD&gt;&lt;TD width="124.296875px" height="135px"&gt;1r7&lt;/TD&gt;&lt;TD width="157.625px" height="135px"&gt;SUCCESS&lt;/TD&gt;&lt;TD width="108.390625px" height="135px"&gt;Template&lt;/TD&gt;&lt;TD width="240.234375px" height="135px"&gt;Template20240306102852.csv&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="104.53125px" height="135px"&gt;3&lt;/TD&gt;&lt;TD width="102.703125px" 
height="135px"&gt;Data loaded in RevPro Successfully - Success: 10000 Failed: 0&lt;/TD&gt;&lt;TD width="40px" height="135px"&gt;Y&lt;/TD&gt;&lt;TD width="86.078125px" height="135px"&gt;24&lt;/TD&gt;&lt;TD width="124.296875px" height="135px"&gt;1577&lt;/TD&gt;&lt;TD width="157.625px" height="135px"&gt;SUCCESS&lt;/TD&gt;&lt;TD width="108.390625px" height="135px"&gt;Template&lt;/TD&gt;&lt;TD width="240.234375px" height="135px"&gt;Template20240306102852.csv&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="104.53125px" height="135px"&gt;4&lt;/TD&gt;&lt;TD width="102.703125px" height="135px"&gt;Data loaded in RevPro Successfully - Success: 10000 Failed: 0&lt;/TD&gt;&lt;TD width="40px" height="135px"&gt;Y&lt;/TD&gt;&lt;TD width="86.078125px" height="135px"&gt;24&lt;/TD&gt;&lt;TD width="124.296875px" height="135px"&gt;16577&lt;/TD&gt;&lt;TD width="157.625px" height="135px"&gt;SUCCESS&lt;/TD&gt;&lt;TD width="108.390625px" height="135px"&gt;Template&lt;/TD&gt;&lt;TD width="240.234375px" height="135px"&gt;Template20240306102852.csv&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;Here is an emulation of the compliant JSON.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| makeresults
| eval _raw = "{\"content\" : {
    \"List of Batches Processed\" : [ {
      \"P_REQUEST_ID\" : \"177\",
      \"P_BATCH_ID\" : \"1\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1r7\",
      \"P_BATCH_ID\" : \"2\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1577\",
      \"P_BATCH_ID\" : \"3\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"16577\",
      \"P_BATCH_ID\" : \"4\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }]
  }
}"
``` data emulation above ```&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 15 Mar 2024 07:31:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680798#M232680</guid>
      <dc:creator>yuanliu</dc:creator>
      <dc:date>2024-03-15T07:31:45Z</dc:date>
    </item>
    <item>
      <title>Re: How to extract multiple JSON array?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680816#M232682</link>
      <description>&lt;P&gt;The values we are seeing are all under a single correlationId, so I want to display them like this:&lt;/P&gt;&lt;TABLE border="1" width="100.0030064414748%"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="25%"&gt;correlationID&lt;/TD&gt;&lt;TD width="25%"&gt;BatchId&lt;/TD&gt;&lt;TD width="12.5%"&gt;RequestID&lt;/TD&gt;&lt;TD width="12.5%"&gt;Status&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="25%"&gt;125dfe5&lt;/TD&gt;&lt;TD width="25%"&gt;1&lt;BR /&gt;2&lt;BR /&gt;3&lt;/TD&gt;&lt;TD width="12.5%"&gt;117&lt;BR /&gt;112&lt;BR /&gt;1156&lt;/TD&gt;&lt;TD width="12.5%"&gt;Success&lt;BR /&gt;Success&lt;BR /&gt;Success&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="25%"&gt;32435sf53&lt;/TD&gt;&lt;TD width="25%"&gt;1&lt;BR /&gt;2&lt;/TD&gt;&lt;TD width="12.5%"&gt;&lt;P&gt;324&lt;BR /&gt;536&lt;/P&gt;&lt;P&gt;643&lt;/P&gt;&lt;/TD&gt;&lt;TD width="12.5%"&gt;&lt;P&gt;Success&lt;BR /&gt;Success&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;</description>
      <pubDate>Fri, 15 Mar 2024 10:24:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680816#M232682</guid>
      <dc:creator>karthi2809</dc:creator>
      <dc:date>2024-03-15T10:24:10Z</dc:date>
    </item>
    <item>
      <title>Re: How to extract multiple JSON array?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680891#M232700</link>
      <description>&lt;P&gt;I don't see a correlationID in your sample data. Is it a root-level node in the JSON, or is it inside content as well? I will assume a root-level node. (See my new emulation below.)&lt;/P&gt;&lt;P&gt;Importantly, I am speculating that you want to group all values of these three fields by correlationID. Is that the requirement? I will assume yes. Normally, data people would take the path in my previous comment because you don't want to mix and match BatchId, RequestID, and Status. Do you care whether the order gets mixed up? The result display you illustrated doesn't answer this question.&lt;/P&gt;&lt;P&gt;I will assume that you do care about order of appearance. (But I warn you that three ordered lists of &amp;gt; 100 values each are not good for users. I, for one, would hate to look at such a table.) If so, you must answer still more important questions: Does your data contain identical triplets (BatchId, RequestID, Status) within any given correlationID? If so, do you want to preserve all triplets? If you preserve all triplets, do you care about the order of the events that carry them? Or do you want to filter out duplicates? If you remove duplicate triplets, do you care about the order of the events that carried them? If you care about order, what are the criteria for ordering?&lt;/P&gt;&lt;P&gt;See, volunteers on this board have no intimate knowledge of your dataset or your use case. Illustrating a subset of the data in text is an excellent start. But you still need to define the problem well enough that another person without your knowledge could sift through your data and arrive at the same conclusion you would, all without SPL. If the other person has to read your mind, nine times out of ten the mind reader will be wrong.&lt;/P&gt;&lt;P&gt;Below is an emulation that includes correlationID as a root node:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| makeresults
| eval _raw = "{\"content\" : {
    \"List of Batches Processed\" : [ {
      \"P_REQUEST_ID\" : \"177\",
      \"P_BATCH_ID\" : \"1\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1r7\",
      \"P_BATCH_ID\" : \"2\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1577\",
      \"P_BATCH_ID\" : \"3\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"16577\",
      \"P_BATCH_ID\" : \"4\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }]
  },
  \"correlationID\": \"125dfe5\"
}"
| spath
``` data emulation above ```&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 15 Mar 2024 23:42:35 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680891#M232700</guid>
      <dc:creator>yuanliu</dc:creator>
      <dc:date>2024-03-15T23:42:35Z</dc:date>
    </item>
    <item>
      <title>Re: How to extract multiple JSON array?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680892#M232701</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.splunk.com/t5/user/viewprofilepage/user-id/33901"&gt;@yuanliu&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;I attached a sample with correlationId. So if we extract the result, the table will go beyond pagination as well, right?&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;{
  "correlationId" : "490cfba0e9f3c770b40",
  "message" : "Processed all revenueData",
  "tracePoint" : "FLOW",
  "priority" : "INFO",
  "category" : "prc-api",
  "elapsed" : 472,
  "locationInfo" : {
    "lineInFile" : "205",
    "component" : "json-logger:logger",
    "fileName" : "G.xml",
    "rootContainer" : "syncFlow"
  },
  "timestamp" : "2024-03-06T20:57:17.119Z",
  "content" : {
    "List of Batches Processed" : [ {
      "P_REQUEST_ID" : "1005377",
      "P_BATCH_ID" : "1",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "MAR-24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_FILE_NAME" : "Template20240306102852.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }, {
      "P_REQUEST_ID" : "1005177",
      "P_BATCH_ID" : "2",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "MAR-24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_FILE_NAME" : "Template20240306102959.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }, {
      "P_REQUEST_ID" : "1005377",
      "P_BATCH_ID" : "3",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "MAR-24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_ZUORA_FILE_NAME" : "Template20240306103103.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }, {
      "P_REQUEST_ID" : "1005377",
      "P_BATCH_ID" : "4",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "MAR-24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_ZUORA_FILE_NAME" : "Template20240306103205.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }, {
      "P_REQUEST_ID" : "1005377",
      "P_BATCH_ID" : "5",
      "P_TEMPLATE" : "Template",
      "P_PERIOD" : "MAR-24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_FILE_NAME" : "Template20240306103306.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }, {
      "P_REQUEST_ID" : "100532177",
      "P_BATCH_ID" : "6",
      "P_TEMPLATE" : "ATVI_Transaction_Template",
      "P_PERIOD" : "MAR-24",
      "P_MORE_BATCHES_EXISTS" : "Y",
      "P_ZUORA_FILE_NAME" : "rev_ATVI_Transaction_Template20240306103407.csv",
      "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
      "P_RETURN_STATUS" : "SUCCESS"
    }&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 16 Mar 2024 00:01:42 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680892#M232701</guid>
      <dc:creator>karthi2809</dc:creator>
      <dc:date>2024-03-16T00:01:42Z</dc:date>
    </item>
    <item>
      <title>Re: How to extract multiple JSON array?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680903#M232703</link>
      <description>&lt;P&gt;Having a more accurate representation of the data is definitely an improvement. But you still need to answer the questions about data characteristics (there are lots of possibilities regarding the triplets P_BATCH_ID, P_REQUEST_ID, and P_RETURN_STATUS) and those about the desired results (why multiple columns with ordered lists, whether triplets should be deduplicated, how triplets should be ordered, etc.), because each combination requires a different solution and can give you very different results. Other people's mind reading is more often wrong than right.&lt;/P&gt;&lt;P&gt;Let me try two mind readings to illustrate.&lt;/P&gt;&lt;P&gt;First, you want to preserve every triplet even if they repeat, and you want to present them in the order of event arrival as well as the order in which they appear inside each event, grouped by correlationId. Absolutely no dedup. (Although this looks like the fewest commands, this "solution" is the most demanding in memory.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| rename content."List of Batches Processed"{}.* as *
| fields P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats list(P_*) as * by correlationId
``` mind-reading #1 ```&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Using a composite emulation based on samples you provided (see end of this post), you will get&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;correlationId&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;BATCH_ID&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;REQUEST_ID&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;RETURN_STATUS&lt;/DIV&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;490cfba0e9f3c770b40&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;1&lt;/DIV&gt;&lt;DIV class=""&gt;2&lt;/DIV&gt;&lt;DIV class=""&gt;3&lt;/DIV&gt;&lt;DIV class=""&gt;4&lt;/DIV&gt;&lt;DIV class=""&gt;1&lt;/DIV&gt;&lt;DIV class=""&gt;2&lt;/DIV&gt;&lt;DIV class=""&gt;3&lt;/DIV&gt;&lt;DIV class=""&gt;4&lt;/DIV&gt;&lt;DIV class=""&gt;5&lt;/DIV&gt;&lt;DIV class=""&gt;6&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;177&lt;/DIV&gt;&lt;DIV class=""&gt;1r7&lt;/DIV&gt;&lt;DIV class=""&gt;1577&lt;/DIV&gt;&lt;DIV class=""&gt;16577&lt;/DIV&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;1005177&lt;/DIV&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;100532177&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;Does &amp;nbsp;this look like something you need?&lt;/P&gt;&lt;P&gt;Or, mind-reading 2. You don't want any&amp;nbsp;duplicate triplet; neither the order these triplets arrive with events nor the order they appear in individual events matters. 
&amp;nbsp;You want maximum dedup, just group by correlationId.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| spath path=content."List of Batches Processed"{}
| mvexpand content."List of Batches Processed"{}
| spath input=content."List of Batches Processed"{}
| stats count by P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats list(P_*) as * by correlationId
``` mind-reading #2 ```&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The same emulation will give&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;correlationId&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;BATCH_ID&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;REQUEST_ID&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;RETURN_STATUS&lt;/DIV&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;490cfba0e9f3c770b40&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;1&lt;/DIV&gt;&lt;DIV class=""&gt;1&lt;/DIV&gt;&lt;DIV class=""&gt;2&lt;/DIV&gt;&lt;DIV class=""&gt;2&lt;/DIV&gt;&lt;DIV class=""&gt;3&lt;/DIV&gt;&lt;DIV class=""&gt;3&lt;/DIV&gt;&lt;DIV class=""&gt;4&lt;/DIV&gt;&lt;DIV class=""&gt;4&lt;/DIV&gt;&lt;DIV class=""&gt;5&lt;/DIV&gt;&lt;DIV class=""&gt;6&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;177&lt;/DIV&gt;&lt;DIV class=""&gt;1005177&lt;/DIV&gt;&lt;DIV class=""&gt;1r7&lt;/DIV&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;1577&lt;/DIV&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;16577&lt;/DIV&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;100532177&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;DIV class=""&gt;SUCCESS&lt;/DIV&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;Note: Even though this mock result is only superficially different from the previous one, the two can be materially very different if real data contain lots of duplicate triplets.&lt;/P&gt;&lt;P&gt;As a bonus, I want to throw in a third mind-reading: You don't care about triplets at all. 
&amp;nbsp;You only want to know which values are present in each of P_BATCH_ID, P_REQUEST_ID, and P_RETURN_STATUS. (This one is the least demanding in memory, and computationally light.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| rename content."List of Batches Processed"{}.* as *
| fields P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats values(P_*) as * by correlationId
``` mind-reading extremo ```&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The emulated data will give&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;correlationId&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;BATCH_ID&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;REQUEST_ID&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;RETURN_STATUS&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;490cfba0e9f3c770b40&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;1&lt;/DIV&gt;&lt;DIV class=""&gt;2&lt;/DIV&gt;&lt;DIV class=""&gt;3&lt;/DIV&gt;&lt;DIV class=""&gt;4&lt;/DIV&gt;&lt;DIV class=""&gt;5&lt;/DIV&gt;&lt;DIV class=""&gt;6&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;&lt;DIV class=""&gt;1005177&lt;/DIV&gt;&lt;DIV class=""&gt;100532177&lt;/DIV&gt;&lt;DIV class=""&gt;1005377&lt;/DIV&gt;&lt;DIV class=""&gt;1577&lt;/DIV&gt;&lt;DIV class=""&gt;16577&lt;/DIV&gt;&lt;DIV class=""&gt;177&lt;/DIV&gt;&lt;DIV class=""&gt;1r7&lt;/DIV&gt;&lt;/TD&gt;&lt;TD&gt;SUCCESS&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;Is this closer to what you want?&lt;/P&gt;&lt;P&gt;The above three very different results are derived from assuming that the two sample/mock JSON data contain identical correlationId, like emulated below. &amp;nbsp;They all kind of fit into the mock result table you showed. &amp;nbsp;How can volunteers tell?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| makeresults
| fields - _*
| eval data = mvappend("{  \"correlationId\" : \"490cfba0e9f3c770b40\",
 \"content\" : {
    \"List of Batches Processed\" : [ {
      \"P_REQUEST_ID\" : \"177\",
      \"P_BATCH_ID\" : \"1\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1r7\",
      \"P_BATCH_ID\" : \"2\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1577\",
      \"P_BATCH_ID\" : \"3\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"16577\",
      \"P_BATCH_ID\" : \"4\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }]
  }
}", "{
  \"correlationId\" : \"490cfba0e9f3c770b40\",
  \"message\" : \"Processed all revenueData\",
  \"tracePoint\" : \"FLOW\",
  \"priority\" : \"INFO\",
  \"category\" : \"prc-api\",
  \"elapsed\" : 472,
  \"locationInfo\" : {
    \"lineInFile\" : \"205\",
    \"component\" : \"json-logger:logger\",
    \"fileName\" : \"G.xml\",
    \"rootContainer\" : \"syncFlow\"
  },
  \"timestamp\" : \"2024-03-06T20:57:17.119Z\",
  \"content\" : {
    \"List of Batches Processed\" : [ {
      \"P_REQUEST_ID\" : \"1005377\",
      \"P_BATCH_ID\" : \"1\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"MAR-24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_FILE_NAME\" : \"Template20240306102852.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1005177\",
      \"P_BATCH_ID\" : \"2\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"MAR-24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_FILE_NAME\" : \"Template20240306102959.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1005377\",
      \"P_BATCH_ID\" : \"3\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"MAR-24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306103103.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1005377\",
      \"P_BATCH_ID\" : \"4\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"MAR-24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"Template20240306103205.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"1005377\",
      \"P_BATCH_ID\" : \"5\",
      \"P_TEMPLATE\" : \"Template\",
      \"P_PERIOD\" : \"MAR-24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_FILE_NAME\" : \"Template20240306103306.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }, {
      \"P_REQUEST_ID\" : \"100532177\",
      \"P_BATCH_ID\" : \"6\",
      \"P_TEMPLATE\" : \"ATVI_Transaction_Template\",
      \"P_PERIOD\" : \"MAR-24\",
      \"P_MORE_BATCHES_EXISTS\" : \"Y\",
      \"P_ZUORA_FILE_NAME\" : \"rev_ATVI_Transaction_Template20240306103407.csv\",
      \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
      \"P_RETURN_STATUS\" : \"SUCCESS\"
    }]
  }
}")
| mvexpand data
| rename data AS _raw
| spath
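``` Hedged sketch (my addition, not part of the emulation): one way to group the three fields by correlationId after spath. The quoted field paths are assumptions based on spath's default extraction of the emulated events above; adjust them to match your actual data. ```
| rename "content.List of Batches Processed{}.P_BATCH_ID" AS BATCH_ID, "content.List of Batches Processed{}.P_REQUEST_ID" AS REQUEST_ID, "content.List of Batches Processed{}.P_RETURN_STATUS" AS RETURN_STATUS
| stats list(BATCH_ID) AS BATCH_ID, list(REQUEST_ID) AS REQUEST_ID, values(RETURN_STATUS) AS RETURN_STATUS BY correlationId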
``` data emulation above ```&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 16 Mar 2024 07:28:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/680903#M232703</guid>
      <dc:creator>yuanliu</dc:creator>
      <dc:date>2024-03-16T07:28:50Z</dc:date>
    </item>
    <item>
      <title>Re: How to extract multiple JSON array?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/681007#M232734</link>
      <description>&lt;P&gt;Thanks, you made my day. I need to show this in a dashboard table. How do I use table after stats? I am also getting this warning message:&lt;/P&gt;&lt;DIV class=""&gt;&lt;DIV class=""&gt;'list' command: Limit of '100' for values reached. Additional values may have been truncated or ignored.&lt;/DIV&gt;&lt;/DIV&gt;&lt;P&gt;Your mind-reading #1 and #3 are working as expected, but I have a question: my content list has 134 BATCH_ID values, yet Splunk extracts and shows only 26, and the rest are not showing. How can I handle this? I need to fix that issue. Does this need to be extracted via props.conf and transforms.conf while indexing the data? Please help me fix it.&lt;/P&gt;</description>
      <pubDate>Mon, 18 Mar 2024 17:32:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/681007#M232734</guid>
      <dc:creator>karthi2809</dc:creator>
      <dc:date>2024-03-18T17:32:43Z</dc:date>
    </item>
    <item>
      <title>Re: How to extract multiple JSON array?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/681311#M232818</link>
      <description>&lt;P&gt;If you get Limit of '100', you must have picked mind-reading #1, since you didn't pick #2. &amp;nbsp;That is just the problem with &lt;A href="https://docs.splunk.com/Documentation/Splunk/list/SearchReference/Multivaluefunctions#list.28.26lt.3Bvalue.26gt.3B.29" target="_blank" rel="noopener"&gt;list&lt;/A&gt;. &amp;nbsp;You can increase this limit somewhat (see&amp;nbsp;&lt;A href="https://docs.splunk.com/Documentation/Splunk/latest/admin/Limitsconf#.5Bstats.7Csistats.5D" target="_blank" rel="noopener"&gt;[stats|sistats]&lt;/A&gt; in limits.conf). &amp;nbsp;But be very careful.&lt;/P&gt;&lt;P&gt;As to BATCH_ID, I still don't know what 134 and 26 refer to. &amp;nbsp;One correlationId? &amp;nbsp;All events? &amp;nbsp;Is it because of the 100 limit on&amp;nbsp;&lt;SPAN&gt;list_maxsize? &amp;nbsp;You should probably post a new question with a proper setup and a detailed explanation.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 20 Mar 2024 05:06:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-extract-multiple-JSON-array/m-p/681311#M232818</guid>
      <dc:creator>yuanliu</dc:creator>
      <dc:date>2024-03-20T05:06:45Z</dc:date>
    </item>
  </channel>
</rss>

