<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to convert a json into table with some unstructured data? in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545970#M154745</link>
    <description>&lt;P&gt;I want to fetch logs of only service B based on the userid (which is a common field) fetched from service A. I don't want to merge these two data sets.&lt;/P&gt;&lt;P&gt;Also, as I am new to this, can you please give an example?&amp;nbsp;&lt;/P&gt;&lt;P&gt;For instance, after extracting fields with rex, how do I use them in eval to join the conditions?&lt;/P&gt;</description>
    <pubDate>Tue, 30 Mar 2021 08:11:41 GMT</pubDate>
    <dc:creator>jonthree</dc:creator>
    <dc:date>2021-03-30T08:11:41Z</dc:date>
    <item>
      <title>How to convert a json into table with some unstructured data?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545205#M154401</link>
      <description>&lt;P&gt;So I have two different services where an API call starts from service A and propagates to service B. I want to trace the errors for this, and I am creating a dashboard to show the consolidated errors.&lt;/P&gt;&lt;P&gt;The logs are as follows.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Service A logs:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;10.0.9.456 - - 23/Mar/2021:17:29:52 +0000 "POST Error occured in service A status 400 bad request referenceid 1615 msg Some bad request error occured in application B &lt;STRONG&gt;status &lt;/STRONG&gt;400 &lt;STRONG&gt;url&lt;/STRONG&gt; /test/user/myuserfield/authorize?service=myservicename&amp;amp;serviceT=myserviceTypeid&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Service B logs:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;{
  "userId": "/myuserfield",
  "transactionId": "abcd",
  "timestamp": "2021-03-24T15:41:25.770Z",
  "eventName": "myevent",
  "component": "mycomponent",
  "response": {
    "statusCode": "400",
    "detail": {
      "reason": "Bad Request"
    }
  },
  "http": {
    "request": {
      "method": "POST",
      "path": "http://dummyurl",
      "queryParameters": {
        "serviceId": [
          "myservicename"
        ],
        "serviceType": [
          "myservicetype"
        ]
      }
    }
  }
}&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So the mapping of fields between these two services is as follows:&lt;/P&gt;&lt;TABLE border="1" width="100%"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="50%"&gt;Service A Field&lt;/TD&gt;&lt;TD width="50%"&gt;Service B Field&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="50%"&gt;status&lt;/TD&gt;&lt;TD width="50%"&gt;statusCode&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="50%"&gt;service&lt;/TD&gt;&lt;TD width="50%"&gt;serviceId&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;serviceT&lt;/TD&gt;&lt;TD&gt;serviceType&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="50%"&gt;user&lt;/TD&gt;&lt;TD width="50%"&gt;userId&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have tried to use a subsearch to extract fields from service A:&lt;/P&gt;&lt;P&gt;index=*serviceB*&lt;BR /&gt;| spath | rename userId as user, http.request.queryParameters.serviceId{} as service, http.request.queryParameters.serviceType{} as serviceT&lt;BR /&gt;| search [search index=*serviceA* | rex "/test/user(?&amp;lt;user&amp;gt;/\w+)+/authorize.*\?+service+\=(?&amp;lt;service&amp;gt;\w+)+\&amp;amp;+serviceT\=(?&amp;lt;serviceT&amp;gt;.*)\"" | dedup user service serviceT&lt;BR /&gt;| fields user service serviceT]&lt;/P&gt;&lt;P&gt;The above expression gives me the logs of service B that are propagated from service A. What I want now is to display the data in a tabular format for better readability. So I have two questions:&lt;/P&gt;&lt;P&gt;1. Am I going with the right approach by using a subsearch here? Does the expression seem correct, and is it the best possible solution?&lt;/P&gt;&lt;P&gt;2. The above expression gives me different error logs for different users in JSON format. How do I convert these to a tabular format with userId, time, status, etc.? I also want to filter my table based on multiple status filters (like 4XX, 5XX, etc.). How do I achieve that?&lt;/P&gt;</description>
      <pubDate>Wed, 24 Mar 2021 19:43:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545205#M154401</guid>
      <dc:creator>jonthree</dc:creator>
      <dc:date>2021-03-24T19:43:09Z</dc:date>
    </item>
    <item>
      <title>Re: How to convert a json into table with some unstructured data?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545228#M154406</link>
      <description>&lt;P&gt;Your approach of using the service A results to filter the outer search of service B is fine, but wherever you have subsearches, you need to consider the size of the result set coming back from the subsearch to constrain the outer one. If the subsearch result set is small, then this is probably a good way to filter the service B search.&lt;/P&gt;&lt;P&gt;There are a number of possible solutions to 'merge' two or more data sets. I tend to favour using stats to combine the data types, as that does not have any subsearch limitations. It's not always appropriate, as it often depends on the sizes of the data sets you're searching, but it uses the principle of&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;search (data_set_1_characteristics) OR (data_set_2_characteristics)
| eval user=if(DS1,user_from_DS1, user_from_DS2), ...
| eval .... further conditional eval to munge data ....
| stats values(*) as * by X Y Z&lt;/LI-CODE&gt;&lt;P&gt;and then, once you have the merged data sets, you manipulate them into the form needed to visualise.&lt;/P&gt;&lt;P&gt;If you have used the spath expression from your example, then you have the JSON-extracted fields, so you can table them as needed.&lt;/P&gt;&lt;P&gt;As for filtering on status, I'm assuming this is via a dashboard. You need to set up the appropriate inputs, either as dropdowns with fixed status values or populated from values in the data, and then use the tokens in where or search clauses as part of your queries.&lt;/P&gt;&lt;P&gt;See the Simple XML panel reference and the token usage documentation to help you with dashboards:&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.splunk.com/Documentation/Splunk/8.1.2/Viz/PanelreferenceforSimplifiedXML" target="_blank"&gt;https://docs.splunk.com/Documentation/Splunk/8.1.2/Viz/PanelreferenceforSimplifiedXML&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.splunk.com/Documentation/Splunk/8.1.2/Viz/tokens" target="_blank"&gt;https://docs.splunk.com/Documentation/Splunk/8.1.2/Viz/tokens&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 24 Mar 2021 21:42:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545228#M154406</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2021-03-24T21:42:33Z</dc:date>
    </item>
    <item>
      <title>Re: How to convert a json into table with some unstructured data?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545970#M154745</link>
      <description>&lt;P&gt;I want to fetch logs of only service B based on the userid (which is a common field) fetched from service A. I don't want to merge these two data sets.&lt;/P&gt;&lt;P&gt;Also, as I am new to this, can you please give an example?&amp;nbsp;&lt;/P&gt;&lt;P&gt;For instance, after extracting fields with rex, how do I use them in eval to join the conditions?&lt;/P&gt;</description>
      <pubDate>Tue, 30 Mar 2021 08:11:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545970#M154745</guid>
      <dc:creator>jonthree</dc:creator>
      <dc:date>2021-03-30T08:11:41Z</dc:date>
    </item>
    <item>
      <title>Re: How to convert a json into table with some unstructured data?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545986#M154753</link>
      <description>&lt;P&gt;Your existing search is already giving you the data, so you just need to table it as you need it. Look at your list of fields when you run the search in verbose mode. You already seem to know what you are doing, having used spath and renamed the fields.&lt;/P&gt;&lt;P&gt;You say your search is giving you the logs you need, so just use the following command to put them into a table.&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| table _time user service serviceT response.statusCode&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 30 Mar 2021 09:36:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/545986#M154753</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2021-03-30T09:36:48Z</dc:date>
    </item>
    <item>
      <title>Re: How to convert a json into table with some unstructured data?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/546020#M154762</link>
      <description>&lt;P&gt;Thanks. One last problem that I am facing is this:&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I convert the data into tabular format, I am getting duplicate values in my table:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;TABLE border="1" width="100%"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD width="33.333333333333336%" height="24px"&gt;time&lt;/TD&gt;&lt;TD width="33.333333333333336%" height="24px"&gt;userid&lt;/TD&gt;&lt;TD width="33.333333333333336%" height="24px"&gt;status&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD width="33.333333333333336%" height="24px"&gt;&lt;SPAN&gt;2021-03-29 18:46:05&lt;/SPAN&gt;&lt;/TD&gt;&lt;TD width="33.333333333333336%" height="24px"&gt;testuserid&lt;BR /&gt;testuserid&lt;/TD&gt;&lt;TD width="33.333333333333336%" height="24px"&gt;401&lt;BR /&gt;401&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have used the below query to extract the results:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;index=*serviceB*
| spath | rename userId as user, http.request.queryParameters.serviceId{} as service, http.request.queryParameters.serviceType{} as serviceT, response.statusCode as status
| search [search index=*serviceA* | rex "status\s+(?&amp;lt;status&amp;gt;\d+).*/test/user(?&amp;lt;user&amp;gt;/\w+)+/authorize.*\?+service+\=(?&amp;lt;service&amp;gt;\w+)+\&amp;amp;+serviceT\=(?&amp;lt;serviceT&amp;gt;.*)\"" | dedup user service serviceT status
| fields user service serviceT status]
| table _time user status&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Does using &lt;STRONG&gt;spath&lt;/STRONG&gt; alone extract all the fields again here?&amp;nbsp;I am not using spath individually for these fields.&lt;/P&gt;&lt;P&gt;Or is there any other reason?&lt;/P&gt;</description>
      <pubDate>Tue, 30 Mar 2021 14:33:00 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/546020#M154762</guid>
      <dc:creator>jonthree</dc:creator>
      <dc:date>2021-03-30T14:33:00Z</dc:date>
    </item>
    <item>
      <title>Re: How to convert a json into table with some unstructured data?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/546089#M154778</link>
      <description>&lt;P&gt;I suspect it depends on how you are ingesting your JSON data for serviceB. If you are using INDEXED_EXTRACTIONS=json, then that's most likely the reason. In that case, your indexing process has already extracted all the JSON fields AND indexed them. Running spath will then auto-extract all fields in the first 5000 bytes of the JSON again, probably creating multivalue fields.&lt;/P&gt;&lt;P&gt;Do you really need to INDEX all fields? That will increase the size of your index and is not necessary.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 30 Mar 2021 21:33:04 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/546089#M154778</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2021-03-30T21:33:04Z</dc:date>
    </item>
    <item>
      <title>Re: How to convert a json into table with some unstructured data?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/546090#M154779</link>
      <description>&lt;P&gt;For the time being you can always add&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;| eval user=mvdedup(user), status=mvdedup(status)&lt;/LI-CODE&gt;&lt;P&gt;to remove the duplicates. If the fields are already indexed (you can check this by running index=serviceB and looking at the verbose fields list), then there is no need to run the spath command, and removing it will probably also fix your issue.&lt;/P&gt;</description>
      <pubDate>Tue, 30 Mar 2021 21:37:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/How-to-convert-a-json-into-table-with-some-unstructured-data/m-p/546090#M154779</guid>
      <dc:creator>bowesmana</dc:creator>
      <dc:date>2021-03-30T21:37:41Z</dc:date>
    </item>
  </channel>
</rss>

