Getting Data In

Unable to parse nested JSON

aayushisplunk1
Path Finder

Hello All,

I am facing issues parsing JSON data into the required table.
The JSON file is pulled into Splunk as a single event. I can fetch the fields individually, but I am unable to correlate them as they are nested in the JSON.
Please let me know whether this is doable and, if so, how.

Query:
source=source1 host=host1 index=index1 sourcetype=_json1
| head 1
| table issues{}.fields{}.project, issues{}.changelog.histories{}.author, issues{}.changelog.histories{}.created

Input JSON:
(screenshot; the event is posted as text later in the thread)

The required output table:
(screenshot: columns project, author, created)

1 Solution

kamlesh_vaghela
SplunkTrust

@aayushisplunk1

Can you please try this?

YOUR_SEARCH 
| spath path=issues{} output=issues 
| mvexpand issues 
| fields issues 
| eval _raw=issues 
| extract 
| rename changelog.histories{}.* as *, fields.* as *
| eval temp = mvzip(author,created)
| mvexpand temp
| eval author=mvindex(split(temp,","),0), created=mvindex(split(temp,","),1)
| table project author created
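The key step here is `mvzip` followed by `mvexpand` and `split`, which keeps each author paired with its own created time. Outside Splunk, the mechanics of that trick can be sketched in Python (an illustration of the pairing logic, not SPL itself):

```python
# After the renames, one expanded issue carries two multivalue fields:
author  = ["ABC", "XYZ"]     # changelog.histories{}.author
created = ["123", "456"]     # changelog.histories{}.created

# mvzip(author, created) pairs the values positionally into "author,created" strings
temp = [f"{a},{c}" for a, c in zip(author, created)]
# temp == ["ABC,123", "XYZ,456"]

# mvexpand temp yields one result per pair; split(temp, ",") recovers each half
rows = [(t.split(",")[0], t.split(",")[1]) for t in temp]
print(rows)
```

Because the pairing is positional, this only works when the two multivalue fields have the same length and order, which they do here since both come from the same `histories` array.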

Sample Search:

| makeresults 
| eval _raw="{\"issues\":[{\"changelog\":{\"histories\":[{\"author\":\"ABC\",\"created\":\"123\"},{\"author\":\"XYZ\",\"created\":\"456\"}]},\"fields\":{\"project\":\"test1\"},\"id\":\"1\"},{\"changelog\":{\"histories\":[{\"author\":\"ABC\",\"created\":\"1234\"},{\"author\":\"XYZ4\",\"created\":\"456\"}]},\"fields\":{\"project\":\"test1\"},\"id\":\"2\"},{\"changelog\":{\"histories\":[{\"author\":\"ABC2\",\"created\":\"1232\"},{\"author\":\"XYZ2\",\"created\":\"4562\"}]},\"fields\":{\"project\":\"test12\"},\"id\":\"3\"}]}" 
| extract 
| spath path=issues{} output=issues 
| mvexpand issues 
| fields issues 
| eval _raw=issues 
| extract 
| rename changelog.histories{}.* as *, fields.* as *
| eval temp = mvzip(author,created)
| mvexpand temp
| eval author=mvindex(split(temp,","),0), created=mvindex(split(temp,","),1)
| table project author created

Thanks


Sukisen1981
Champion

try this:

| makeresults
 | eval raw="issues:    [
     {
        changelog:    {    
          histories:    [    
           {    
              author:    ABC
              created:    123    

           }    
           {    
              author:    XYZ
              created:     456

           }    
         ]    
       }    
        fields:    {
          project: test1
       }    
      id: 1
     }
     {
        changelog:    {    
          histories:    [    
           {    
              author:    ABC
              created:    1234    

           }    
           {    
              author:    XYZ4
              created:     456

           }    
         ]    
       }    
        fields:    {
          project: test1
       }    
       id : 2
     }
     {
        changelog:    {    
          histories:    [    
           {    
              author:    ABC2
              created:    1232    

           }    
           {    
              author:    XYZ2
              created:     4562

           }    
         ]    
       }    
        fields:    {
          project: test12 
       }    
      id: 3
     }
             ]"
| eval raw=split(raw,"id")
| mvexpand raw
| rex field=raw "author:(?<author>.*)" max_match=0
| rex field=raw "created:(?<created>.*)" max_match=0
| eval x=mvzip(author,created)
| rex field=raw "project:(?<project>.*)" max_match=0
| fields - _time
| fields project,x
| mvexpand x
| rex field=x "(?<author>.*?)," max_match=0
| rex field=x ",(?<created>.*)" max_match=0
| fields project,author,created
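This regex-based approach — split the text into per-issue chunks, pull all `author` and `created` values from each chunk, then pair them by position — can be sketched in Python (the `chunk` string is a hypothetical sample of one piece after splitting on `id`):

```python
import re

# One "issue" chunk after splitting the pasted text on "id" (hypothetical sample)
chunk = """
changelog: { histories: [
  { author: ABC created: 123 }
  { author: XYZ created: 456 }
] }
fields: { project: test1 }
"""

# ~ rex "author:(?<author>.*)" max_match=0  and  rex "created:(?<created>.*)" max_match=0
authors  = re.findall(r"author:\s*(\S+)", chunk)
createds = re.findall(r"created:\s*(\S+)", chunk)

# ~ mvzip(author, created): positional pairing, so the two lists must line up
pairs = list(zip(authors, createds))
print(pairs)
```

Like the SPL version, this relies on every `author:` having a matching `created:` in the same order; if the text were valid JSON, `spath`/`json.loads` would be the safer route.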


Sukisen1981
Champion

What kind of default fields are you getting under interesting fields, and have you tried spath?
If you want more precise help, please post your event sample as text so that we can reuse it.


aayushisplunk1
Path Finder

Hello,

Thank you for your quick response!
As for your queries:

The default fields I am getting are listed below:
issues{}.fields{}.project,
issues{}.changelog.histories{}.author,
issues{}.changelog.histories{}.created,
issues{}.id

I tried using spath, but I don't think it helps much since I already have the required fields. The problem is that I am unable to correlate these field values as they are nested in the JSON.

JSON in text:

{

issues: [
{
changelog: {

histories: [

{

author: ABC
created: 123

        }   
        {   
             author:    XYZ
             created:    456

        }   
        ]   
    }   
     fields:    {
         project: test1
    }   
     id: 1
    }
    {
     changelog: {   
         histories: [   
        {   
             author:    ABC
             created:   1234    

        }   
        {   
             author:    XYZ4
             created:    456

        }   
        ]   
    }   
     fields:    {
         project: test1
    }   
      id : 2
    }
    {
     changelog: {   
         histories: [   
        {   
             author:    ABC2
             created:   1232    

        }   
        {   
             author:    XYZ2
             created:    4562

        }   
        ]   
    }   
     fields:    {
         project: test12 
    }   
     id: 3
    }
            ]

}


kamlesh_vaghela
SplunkTrust
SplunkTrust

@aayushisplunk1

Can you please share the raw event? The event you provided is not valid JSON.


aayushisplunk1
Path Finder

@kamlesh_vaghela

Raw event:

{"issues":[{"changelog":{"histories":[{"author":"ABC","created":"123"},{"author":"XYZ","created":"456"}]},"fields":{"project":"test1"},"id":"1"},{"changelog":{"histories":[{"author":"ABC","created":"1234"},{"author":"XYZ4","created":"456"}]},"fields":{"project":"test1"},"id":"2"},{"changelog":{"histories":[{"author":"ABC2","created":"1232"},{"author":"XYZ2","created":"4562"}]},"fields":{"project":"test12"},"id":"3"}]}
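This event parses cleanly as JSON. As a quick sanity check outside Splunk, a few lines of Python can build exactly the correlated table the question asks for (one row per history entry, each keeping its own issue's project):

```python
import json

raw = '{"issues":[{"changelog":{"histories":[{"author":"ABC","created":"123"},{"author":"XYZ","created":"456"}]},"fields":{"project":"test1"},"id":"1"},{"changelog":{"histories":[{"author":"ABC","created":"1234"},{"author":"XYZ4","created":"456"}]},"fields":{"project":"test1"},"id":"2"},{"changelog":{"histories":[{"author":"ABC2","created":"1232"},{"author":"XYZ2","created":"4562"}]},"fields":{"project":"test12"},"id":"3"}]}'

event = json.loads(raw)   # valid JSON, unlike the indented paste above

# The required correlation: each history row stays tied to its own issue's project
table = [(i["fields"]["project"], h["author"], h["created"])
         for i in event["issues"]
         for h in i["changelog"]["histories"]]
for row in table:
    print(row)
```

This nested iteration is what the accepted SPL answer reproduces with `mvexpand` over `issues{}` and then `mvzip`/`mvexpand` over the history fields.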
