Extract JSON objects

vishaltaneja070
Motivator

How can I extract fields from this:
"properties": {"nextLink": null,
"columns": [
{"name": "Cost", "type": "Number"},
{"name": "Date", "type": "Number"},
{"name": "Charge", "type": "String"},
{"name": "Publisher", "type": "String"},
{"name": "Resource", "type": "String"},
{"name": "Resource", "type": "String"},
{"name": "Service", "type": "String"},
{"name": "Standard", "type": "String"},
"rows": [
[2.06, 20210807, "usage", "uuuu", "hhh", "gd", "bandwidth", "azy", "HHH"],
[2.206, 20210807, "usage", "uuuhhh", "ggg", "gd", "bandwidth", "new", "YYY"] ]

The number of columns can increase.

ITWhisperer
SplunkTrust

Assuming columns is supposed to be an array of name/type pairs (added the closing ]), that there are supposed to be 9 of these pairs (added Comment), and that you have a properly formatted JSON string (added the surrounding and closing braces), you could do something like this:

| makeresults 
| eval _raw="{\"properties\": {\"nextLink\": null,
\"columns\": [
{\"name\": \"Cost\", \"type\": \"Number\"},
{\"name\": \"Date\", \"type\": \"Number\"},
{\"name\": \"Charge\", \"type\": \"String\"},
{\"name\": \"Publisher\", \"type\": \"String\"},
{\"name\": \"Resource\", \"type\": \"String\"},
{\"name\": \"Resource\", \"type\": \"String\"},
{\"name\": \"Service\", \"type\": \"String\"},
{\"name\": \"Standard\", \"type\": \"String\"},
{\"name\": \"Comment\", \"type\": \"String\"}],
\"rows\": [
[2.06, 20210807, \"usage\", \"uuuu\", \"hhh\", \"gd\", \"bandwidth\", \"azy\", \"HHH\"],
[2.206, 20210807, \"usage\", \"uuuhhh\", \"ggg\", \"gd\", \"bandwidth\", \"new\", \"YYY\"]] }}"
| spath path="properties.columns{}.name" output=columnnames
| spath path="properties.rows{}{}" output=rows
| streamstats count as event 
| mvexpand rows
| streamstats count as row by event
| eval index=(row-1)%mvcount(columnnames)
| eval name=mvindex(columnnames,index)
| eval {name}=rows
| eval row=floor((row-1)/mvcount(columnnames))
| fields - columnnames name index rows
| stats values(*) as * by row event
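
A quick note on how this works: streamstats count as event tags each source event before the expansion so values can be regrouped later; mvexpand rows gives each extracted value its own result; the modulo arithmetic against mvcount(columnnames) works out which column each value belongs to, and mvindex looks up that column's name so eval {name}=rows can write the value into a dynamically named field; finally stats values(*) as * by row event reassembles one result per original JSON row. Because both the column names and their count come from the event itself, the same search keeps working when more columns are added.

If your events are already indexed, drop the makeresults/eval test harness and start from your own base search instead. A minimal sketch, where index=your_index and sourcetype=your_sourcetype are placeholders for your environment, and which assumes the full JSON object is in _raw so spath can find properties.columns and properties.rows:

index=your_index sourcetype=your_sourcetype
| spath path="properties.columns{}.name" output=columnnames
| spath path="properties.rows{}{}" output=rows
| streamstats count as event
| mvexpand rows
| streamstats count as row by event
| eval index=(row-1)%mvcount(columnnames)
| eval name=mvindex(columnnames,index)
| eval {name}=rows
| eval row=floor((row-1)/mvcount(columnnames))
| fields - columnnames name index rows
| stats values(*) as * by row event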

 

 

