Getting Data In

How do you convert JSON into a table?

ShagVT
Path Finder

I have an application that generates a JSON status record that looks something like this:

{
    "currentTime": "2019-01-31T13:02:23.622",
    "applicationStartTime": "2019-01-23T18:37:23.613",
    "version": "2018.04.0.10",
    "buildTime": "2018-11-07T20:29:13Z",
    "statistics": {
        "mostRecentMessageTimestamp": "1969-12-31T16:00",
        "metrics": {
            "total":   {"batched": 155, "skipped": 3, "dberror": 7, "alreadyHandled": 18, "total": 183},
            "queue1":  {"batched": 10,  "skipped": 0, "dberror": 1, "alreadyHandled": 6,  "total": 17},
            "queue2":  {"batched": 20,  "skipped": 0, "dberror": 2, "alreadyHandled": 0,  "total": 22},
            "queue3":  {"batched": 30,  "skipped": 3, "dberror": 0, "alreadyHandled": 12, "total": 45},
            "queue4":  {"batched": 40,  "skipped": 0, "dberror": 0, "alreadyHandled": 0,  "total": 40},
            "queue5":  {"batched": 50,  "skipped": 0, "dberror": 4, "alreadyHandled": 0,  "total": 54},
            "unknown": {"batched": 5,   "skipped": 0, "dberror": 0, "alreadyHandled": 0,  "total": 5}
        }
    },
    "database": {
        "status": {"activeConnections": 2, "idleConnections": 48, "threadsAwaitingConnection": 0, "totalConnections": 50}
    }
}

This is repeated by each instance of the application. I've been asked to put together a dashboard that includes the metrics data in nice human-readable tables.

It's easy enough to isolate the most recent record:

index=myindex sourcetype=status host=abc111 | head 1
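Since each application instance emits its own status record, you may instead want the latest event per host rather than a single event overall. A small sketch (assuming events come back in Splunk's default reverse-chronological order, so dedup keeps the newest event for each host):

index=myindex sourcetype=status | dedup host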

It's really not clear to me how to turn this into a table though. I'd like something that looks approximately like this:

Queue      Batched   Skipped  DBError   Handled   Total
total      155       3        7         18        183
queue1     10        0        1         6         17
queue2     20        0        2         0         22
queue3     30        3        0         12        45
queue4     40        0        0         0         40
queue5     50        0        4         0         54
unknown    5         0        0         0         5

Notice that both the total row and the total column are already in the data and don't need to be calculated.

Thank you so much for any help you can offer in this!

1 Solution

kamlesh_vaghela
SplunkTrust
SplunkTrust

@ShagVT

Can you please try this?

YOUR_SEARCH | kv 
| fields statistics.metrics.*.* 
| rename statistics.metrics.* as * 
| eval FieldList="" 
| foreach * 
    [ eval FieldList=if("<<MATCHSTR>>"!="FieldList",FieldList.","."<<MATCHSTR>>","") ] 
| eval FieldList=split(FieldList,",") 
| mvexpand FieldList 
| eval Queue=mvindex(split(FieldList,"."),0),Column=mvindex(split(FieldList,"."),1) 
| eval value="" 
| foreach * 
    [ eval value=if("<<FIELD>>"==Queue.".".Column,'<<FIELD>>',value),{Column}=value ] 
| stats values(batched) as Batched values(skipped) as Skipped values(dberror) as DBerror values(alreadyHandled) as Handled values(total) as Total by Queue

My Sample Search:

| makeresults 
| eval _raw="{\"currentTime\":\"2019-01-31T13:02:23.622\",\"applicationStartTime\":\"2019-01-23T18:37:23.613\",\"version\":\"2018.04.0.10\",\"buildTime\":\"2018-11-07T20:29:13Z\",\"statistics\":{\"mostRecentMessageTimestamp\":\"1969-12-31T16:00\",\"metrics\":{\"total\":{\"batched\":155,\"skipped\":3,\"dberror\":7,\"alreadyHandled\":18,\"total\":183},\"queue1\":{\"batched\":10,\"skipped\":0,\"dberror\":1,\"alreadyHandled\":6,\"total\":17},\"queue2\":{\"batched\":20,\"skipped\":0,\"dberror\":2,\"alreadyHandled\":0,\"total\":22},\"queue3\":{\"batched\":30,\"skipped\":3,\"dberror\":0,\"alreadyHandled\":12,\"total\":45},\"queue4\":{\"batched\":40,\"skipped\":0,\"dberror\":0,\"alreadyHandled\":0,\"total\":40},\"queue5\":{\"batched\":50,\"skipped\":0,\"dberror\":4,\"alreadyHandled\":0,\"total\":54},\"unknown\":{\"batched\":5,\"skipped\":0,\"dberror\":0,\"alreadyHandled\":0,\"total\":5}}},\"database\":{\"status\":{\"activeConnections\":2,\"idleConnections\":48,\"threadsAwaitingConnection\":0,\"totalConnections\":50}}}" 
| kv 
| fields statistics.metrics.*.* 
| rename statistics.metrics.* as * 
| eval FieldList="" 
| foreach * 
    [ eval FieldList=if("<<MATCHSTR>>"!="FieldList",FieldList.","."<<MATCHSTR>>","") ] 
| eval FieldList=split(FieldList,",") 
| mvexpand FieldList 
| eval Queue=mvindex(split(FieldList,"."),0),Column=mvindex(split(FieldList,"."),1) 
| eval value="" 
| foreach * 
    [ eval value=if("<<FIELD>>"==Queue.".".Column,'<<FIELD>>',value),{Column}=value ] 
| stats values(batched) as Batched values(skipped) as Skipped values(dberror) as DBerror values(alreadyHandled) as Handled values(total) as Total by Queue
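For readers who find the foreach/mvexpand approach hard to follow, here is a shorter alternative sketch (untested against the original environment, and assuming the metrics fields are the only ones kept by the fields command). It pivots the single event into rows with transpose, splits the field names back into queue and metric, and rebuilds the table with xyseries:

YOUR_SEARCH | head 1 | kv 
| fields statistics.metrics.*.* 
| rename statistics.metrics.* as * 
| transpose 
| rex field=column "^(?<Queue>[^.]+)\.(?<Metric>.+)$" 
| xyseries Queue Metric "row 1" 
| rename batched as Batched, skipped as Skipped, dberror as DBError, alreadyHandled as Handled, total as Total 
| table Queue Batched Skipped DBError Handled Total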

Thanks


anooshac
Communicator

Hi @kamlesh_vaghela, I have a similar requirement and referred to your solution, but a row is not printed when any one of its values is 0. Is there anything that can be added to the query to handle that case?


ShagVT
Path Finder

Wow - this is really good. I've never come across the foreach command or things like <<MATCHSTR>> and <<FIELD>> before... Now I need to spend some time trying to understand why this works.

