Splunk Search

How to extract JSON array into a table when there is an optional key/value field in the array?

jdc8723
Engager

I have a JSON array with an embedded array and an optional field. I'd like to print the data as a table, with each field extracted and individually accessible. The query I have (using mvzip/mvexpand) works if I ignore the optional field. But when I include it in the query, it limits my results to only the entries that contain that optional field, and the optional field's data does not line up with the rest of the data it should be associated with. I believe I just need to manufacture data for the field when it is not included, but I've tried fillnull and coalesce and neither does what I'm trying to do. Here is example data and the query. The optional field is called "secondary".

| stats count | fields - count
| eval _raw = "
{\"list\":[
{\"attempts\":12,\"failReasons\":[],\"medianDur\":161,\"primary\":\"ActivityA\"},
{\"attempts\":13,\"failReasons\":[],\"medianDur\":162,\"primary\":\"ActivityB1\",\"secondary\":\"section1\"},
{\"attempts\":14,\"failReasons\":[],\"medianDur\":163,\"primary\":\"ActivityC\"},
{\"attempts\":15,\"failReasons\":[],\"medianDur\":164,\"primary\":\"ActivityD\"},
{\"attempts\":16,\"failReasons\":[{\"count\":3,\"error\":\"internal\"}],\"medianDur\":165,\"primary\":\"ActivityE\"},
{\"attempts\":17,\"failReasons\":[],\"medianDur\":166,\"primary\":\"ActivityF\"},
{\"attempts\":18,\"failReasons\":[],\"medianDur\":167,\"primary\":\"ActivityB2\",\"secondary\":\"section2\"}]}"
| spath input=_raw
| rename list{}.* as *
| fillnull value="" secondary
| eval zipped=mvzip( primary, mvzip( attempts, mvzip( medianDur, secondary )))
| mvexpand zipped
| eval zipped=split( zipped, "," )
| eval primary=mvIndex( zipped, 0 )
| eval attempts=mvIndex( zipped, 1 )
| eval medianDur=mvIndex( zipped, 2 )
| eval secondary=mvIndex( zipped, 3 )
| table primary, secondary, attempts, medianDur
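
To illustrate what I mean by manufacturing data for the field: one untested idea is to build the zipped string per list element, so the optional field stays aligned with its own entry instead of collapsing to only the entries that have it. This sketch assumes the mvmap and spath() eval functions are available (Splunk 8.0 or later) and uses element as a scratch field name of my own; it would replace everything after the eval _raw above:

| spath input=_raw path=list{} output=element
| eval zipped=mvmap(element, spath(element, "primary") . "," . spath(element, "attempts") . "," . spath(element, "medianDur") . "," . coalesce(spath(element, "secondary"), ""))
| mvexpand zipped
| eval zipped=split(zipped, ",")
| eval primary=mvindex(zipped, 0), attempts=mvindex(zipped, 1), medianDur=mvindex(zipped, 2), secondary=mvindex(zipped, 3)
| table primary, secondary, attempts, medianDur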

1 Solution

kamlesh_vaghela
SplunkTrust

@jdc8723

You can make this easier by treating each JSON object in the list array as its own event. Can you please try the search below?

| stats count | fields - count
| eval _raw = "
{\"list\":[
{\"attempts\":12,\"failReasons\":[],\"medianDur\":161,\"primary\":\"ActivityA\"},
{\"attempts\":13,\"failReasons\":[],\"medianDur\":162,\"primary\":\"ActivityB1\",\"secondary\":\"section1\"},
{\"attempts\":14,\"failReasons\":[],\"medianDur\":163,\"primary\":\"ActivityC\"},
{\"attempts\":15,\"failReasons\":[],\"medianDur\":164,\"primary\":\"ActivityD\"},
{\"attempts\":16,\"failReasons\":[{\"count\":3,\"error\":\"internal\"}],\"medianDur\":165,\"primary\":\"ActivityE\"},
{\"attempts\":17,\"failReasons\":[],\"medianDur\":166,\"primary\":\"ActivityF\"},
{\"attempts\":18,\"failReasons\":[],\"medianDur\":167,\"primary\":\"ActivityB2\",\"secondary\":\"section2\"}]}"
| spath path=list{} output=LIST
| mvexpand LIST
| rename LIST as _raw
| kv
| table attempts failReasons{}.* medianDur primary secondary
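
If you would rather stay with spath instead of relying on automatic key-value extraction, a variation along these lines (run against the same test-data generator) should produce the same table; the fillnull only blanks out secondary for the entries that don't have it:

| spath path=list{} output=LIST
| mvexpand LIST
| spath input=LIST
| fillnull value="" secondary
| table attempts failReasons{}.* medianDur primary secondary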

Happy Splunking
