Getting Data In

How can we speed up JSON array lookup?

vinodkd
New Member

Hi,

My JSON event is in this form:

{
  "TotalMemUsage": 887992,
  "ProcMemUsage": [
    {
      "Name": "firefox",
      "PID": 758,
      "Usg": 228972
    },
    {
      "Name": "eclipse",
      "PID": 569,
      "Usg": 19756
    }
  ]
}

I have to take the average of each process's memory usage and draw a chart.
For now, I use the following query:

my_search 
| rename ProcMemUsage{}.Name as PNAME | rename ProcMemUsage{}.Usg as PUSAGE
| eval x=mvzip(PNAME,PUSAGE) 
| mvexpand x|eval x=split(x,",")
| eval P_NAME=mvindex(x,0)
| eval P_USAGE = mvindex(x,1)
| stats avg(P_USAGE) as MU by P_NAME 
| sort -MU | fields MU,P_NAME
|chart avg(MU) as AvgMU by P_NAME 
| sort -AvgMU

But it takes a lot of time to complete (approximately 5 minutes with only 30K records).
Is there any way to optimize it? Can we use jsonutils somehow?


vinodkd
New Member

@Iguinn: It doesn't give a noticeable speedup :'(


lguinn2
Legend

I don't really think this will make things faster, but it might help a little. Restricting the event to only the needed fields with `fields` early in the pipeline reduces the data carried through the later commands, and the redundant `stats`/`sort`/`fields` steps before the `chart` can be dropped:

my_search 
| rename ProcMemUsage{}.Name as PNAME | rename ProcMemUsage{}.Usg as PUSAGE
| fields PNAME PUSAGE
| eval x=mvzip(PNAME,PUSAGE) 
| mvexpand x
| eval x=split(x,",")
| eval P_NAME=mvindex(x,0)
| eval P_USAGE = mvindex(x,1)
| chart avg(P_USAGE) as AvgMU by P_NAME 
| sort -AvgMU

It's also a bit cleaner. Post back with your results.
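For reference, here is what the pipeline computes, sketched outside Splunk: pair each process name with its usage (`mvzip`), expand the pairs into rows (`mvexpand`), then average usage per name (`chart avg(...) by P_NAME`). This is an illustrative Python sketch over events shaped like the example in the question, not Splunk code:

```python
from collections import defaultdict

def avg_usage_per_process(events):
    """Average the Usg value per process Name across parsed JSON events."""
    totals = defaultdict(lambda: [0, 0])  # name -> [sum of Usg, count]
    for event in events:
        # Equivalent of mvzip + mvexpand: visit each (Name, Usg) pair.
        for proc in event["ProcMemUsage"]:
            totals[proc["Name"]][0] += proc["Usg"]
            totals[proc["Name"]][1] += 1
    # Equivalent of `chart avg(P_USAGE) by P_NAME | sort -AvgMU`.
    return sorted(
        ((name, s / n) for name, (s, n) in totals.items()),
        key=lambda pair: -pair[1],
    )

events = [
    {"TotalMemUsage": 887992, "ProcMemUsage": [
        {"Name": "firefox", "PID": 758, "Usg": 228972},
        {"Name": "eclipse", "PID": 569, "Usg": 19756},
    ]},
    {"TotalMemUsage": 900000, "ProcMemUsage": [
        {"Name": "firefox", "PID": 758, "Usg": 230000},
    ]},
]
print(avg_usage_per_process(events))
# firefox averages (228972 + 230000) / 2 = 229486.0; eclipse stays at 19756.0
```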
