Getting Data In

Unable to parse nested json

aayushisplunk1
Path Finder

Hello All,

I am facing issues parsing JSON data to form the required table.
The JSON file is being pulled into Splunk as a single event. I can fetch the fields separately, but I am unable to correlate them as illustrated in the JSON.
Please let me know if this is doable and, if so, how.

Query:
source=source1 host=host1 index=index1 sourcetype=_json1
| head 1
| table issues{}.fields{}.project, issues{}.changelog.histories{}.author, issues{}.changelog.histories{}.created

Input json:
[screenshot of the input JSON]

The required output table:
[screenshot of the required output table]


kamlesh_vaghela
SplunkTrust

@aayushisplunk1

Can you please try this?

YOUR_SEARCH 
| spath path=issues{} output=issues 
| mvexpand issues 
| fields issues 
| eval _raw=issues 
| extract 
| rename changelog.histories{}.* as * ,fields.* as *
| eval temp = mvzip(author,created) | mvexpand temp | eval author=mvindex(split(temp,","),0), created=mvindex(split(temp,","),1) | table project author created

Sample Search:

| makeresults 
| eval _raw="{\"issues\":[{\"changelog\":{\"histories\":[{\"author\":\"ABC\",\"created\":\"123\"},{\"author\":\"XYZ\",\"created\":\"456\"}]},\"fields\":{\"project\":\"test1\"},\"id\":\"1\"},{\"changelog\":{\"histories\":[{\"author\":\"ABC\",\"created\":\"1234\"},{\"author\":\"XYZ4\",\"created\":\"456\"}]},\"fields\":{\"project\":\"test1\"},\"id\":\"2\"},{\"changelog\":{\"histories\":[{\"author\":\"ABC2\",\"created\":\"1232\"},{\"author\":\"XYZ2\",\"created\":\"4562\"}]},\"fields\":{\"project\":\"test12\"},\"id\":\"3\"}]}" 
| extract 
| spath path=issues{} output=issues 
| mvexpand issues 
| fields issues 
| eval _raw=issues 
| extract 
| rename changelog.histories{}.* as * ,fields.* as *
| eval temp = mvzip(author,created) | mvexpand temp | eval author=mvindex(split(temp,","),0), created=mvindex(split(temp,","),1) | table project author created
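For readers less familiar with SPL, the flattening that the search above performs (expand each element of issues{}, then pair each histories{} entry with its parent project) can be sketched in plain Python. The sample event is the same one used in the makeresults search:

```python
import json

# Same sample event as in the makeresults search above.
raw = ('{"issues":[{"changelog":{"histories":[{"author":"ABC","created":"123"},'
       '{"author":"XYZ","created":"456"}]},"fields":{"project":"test1"},"id":"1"},'
       '{"changelog":{"histories":[{"author":"ABC","created":"1234"},'
       '{"author":"XYZ4","created":"456"}]},"fields":{"project":"test1"},"id":"2"},'
       '{"changelog":{"histories":[{"author":"ABC2","created":"1232"},'
       '{"author":"XYZ2","created":"4562"}]},"fields":{"project":"test12"},"id":"3"}]}')

rows = []
for issue in json.loads(raw)["issues"]:           # ~ spath path=issues{} + mvexpand
    project = issue["fields"]["project"]
    for hist in issue["changelog"]["histories"]:  # ~ mvzip(author,created) + mvexpand
        rows.append((project, hist["author"], hist["created"]))

for project, author, created in rows:
    print(project, author, created)
```

The key step in both versions is the same: each history entry must be re-associated with the project of the issue it came from before the multivalue fields are expanded, which is exactly what the mvzip/mvexpand pair achieves in SPL.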

Thanks


Sukisen1981
Champion

try this:

| makeresults
 | eval raw="issues:    [
     {
        changelog:    {    
          histories:    [    
           {    
              author:    ABC
              created:    123    

           }    
           {    
              author:    XYZ
              created:     456

           }    
         ]    
       }    
        fields:    {
          project: test1
       }    
      id: 1
     }
     {
        changelog:    {    
          histories:    [    
           {    
              author:    ABC
              created:    1234    

           }    
           {    
              author:    XYZ4
              created:     456

           }    
         ]    
       }    
        fields:    {
          project: test1
       }    
       id : 2
     }
     {
        changelog:    {    
          histories:    [    
           {    
              author:    ABC2
              created:    1232    

           }    
           {    
              author:    XYZ2
              created:     4562

           }    
         ]    
       }    
        fields:    {
          project: test12 
       }    
      id: 3
     }
             ]"
| eval raw=split(raw,"id")
| mvexpand raw
|rex field=raw "author:(?<author>.*)" max_match=0 
| rex field=raw "created:(?<created>.*)" max_match=0
|eval x=mvzip(author,created)
| rex field=raw "project:(?<project>.*)" max_match=0

|  fields - _time
| fields project,x
| mvexpand x
| rex field=x "(?<author>.*?)," max_match=0| rex field=x ",(?<created>.*)" max_match=0
| fields project,author,created
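The same rex-based idea — split the raw text on "id" and pull author/created/project out of each chunk with regexes — can be sketched in Python. The raw string below is a condensed, hypothetical stand-in for the pasted non-JSON text, not the exact event:

```python
import re

# Condensed stand-in for the non-JSON text pasted in the question.
raw = ("changelog: { histories: [ { author: ABC created: 123 } "
       "{ author: XYZ created: 456 } ] } fields: { project: test1 } id: 1 "
       "changelog: { histories: [ { author: ABC2 created: 1232 } "
       "{ author: XYZ2 created: 4562 } ] } fields: { project: test12 } id: 3")

rows = []
for chunk in raw.split("id"):                     # ~ split(raw,"id") + mvexpand
    authors = re.findall(r"author:\s*(\S+)", chunk)
    created = re.findall(r"created:\s*(\S+)", chunk)
    project = re.findall(r"project:\s*(\S+)", chunk)
    for a, c in zip(authors, created):            # ~ mvzip(author,created)
        rows.append((project[0] if project else None, a, c))
```

This works because each "id"-delimited chunk contains exactly one project along with the histories that belong to it, so the regex matches within a chunk are already correlated.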


Sukisen1981
Champion

What kind of default fields are you getting under interesting fields, and have you tried spath?
If you want more precise help, can you please post your event sample as text so that we can reuse it?


aayushisplunk1
Path Finder

Hello,

Thank you for your quick response!
As for your queries:

The default fields I am getting are listed below:
issues{}.fields{}.project,
issues{}.changelog.histories{}.author,
issues{}.changelog.histories{}.created,
issues{}.id

I tried using spath, but I guess it will not help much since I already have the required fields. It is just that I am unable to correlate these field values as per the JSON.

json in text:

{

issues: [
{
changelog: {

histories: [

{

author: ABC
created: 123

        }   
        {   
             author:    XYZ
             created:    456

        }   
        ]   
    }   
     fields:    {
         project: test1
    }   
     id: 1
    }
    {
     changelog: {   
         histories: [   
        {   
             author:    ABC
             created:   1234    

        }   
        {   
             author:    XYZ4
             created:    456

        }   
        ]   
    }   
     fields:    {
         project: test1
    }   
      id : 2
    }
    {
     changelog: {   
         histories: [   
        {   
             author:    ABC2
             created:   1232    

        }   
        {   
             author:    XYZ2
             created:    4562

        }   
        ]   
    }   
     fields:    {
         project: test12 
    }   
     id: 3
    }
            ]

}


kamlesh_vaghela
SplunkTrust
SplunkTrust

@aayushisplunk1

Can you please share the raw event? The event you provided is not valid JSON.


aayushisplunk1
Path Finder

@kamlesh_vaghela

Raw event:

{"issues":[{"changelog":{"histories":[{"author":"ABC","created":"123"},{"author":"XYZ","created":"456"}]},"fields":{"project":"test1"},"id":"1"},{"changelog":{"histories":[{"author":"ABC","created":"1234"},{"author":"XYZ4","created":"456"}]},"fields":{"project":"test1"},"id":"2"},{"changelog":{"histories":[{"author":"ABC2","created":"1232"},{"author":"XYZ2","created":"4562"}]},"fields":{"project":"test12"},"id":"3"}]}
