Monitoring Splunk

Lower Memory usage

JoshuaJohn
Contributor

I have a query that is being blocked from retrieving all relevant data due to a policy that keeps queries under 500MB. Is there any way I could optimize this query?

index=Nitro_server=xs_json earliest=-48h 
| rename hdr.nitro as nitro_loc 
| join type=inner 
    [ inputlookup nitro_loc.csv 
    | search TimeZone="C" OR TimeZone="CDT" 
    | eval nitro_loc=case(len(STORE)==4,STORE,len(STORE)==3,"0".STORE,len(STORE)==2,"00".STORE,len(STORE)==1,"000".STORE) ] 
| search Model="*v10*" nitro_loc="*" FirmwareVersion = * 
| dedup "Mac_Address" 
| stats count by FirmwareVersion TimeZone

Any suggestions would be appreciated!


somesoni2
Revered Legend

Which fields are you getting from the lookup nitro_loc.csv? And on which field are you doing the join?


JoshuaJohn
Contributor

I am doing the join on nitro_loc, which is a 4-digit number, and I am trying to get the timezone out of the CSV.
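(For what it's worth, the case() zero-padding in the subsearch above can be written more compactly with the printf eval function, assuming a Splunk version that supports it and that STORE is numeric:)

```
| inputlookup nitro_loc.csv
| search TimeZone="C" OR TimeZone="CDT"
| eval nitro_loc=printf("%04d", tonumber(STORE))
```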


somesoni2
Revered Legend

Give this a try:

index=Nitro_server=xs_json Model="*v10*" FirmwareVersion = * earliest=-48h 
| fields hdr.nitro "Mac_Address" FirmwareVersion
| dedup "Mac_Address" 
| eval nitro_loc=tonumber('hdr.nitro') 
| search nitro_loc="*"
| lookup nitro_loc.csv nitro_loc OUTPUT TimeZone
| stats count by FirmwareVersion TimeZone

Changes made:
1) Moved filters to the base search wherever possible.
2) Added a fields command to keep only the fields that will be used in the search.
3) Moved dedup earlier in the search so that subsequent operations run only on the required events.
4) Instead of zero-padding the lookup table column nitro_loc (which requires a join), converted the search data field to a number and used a regular lookup.
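One thing to double-check: the lookup command above matches on a CSV column named nitro_loc. If the CSV's key column is actually named STORE (as in the original subsearch), the lookup would need an explicit field mapping, along these lines:

```
| lookup nitro_loc.csv STORE as nitro_loc OUTPUT TimeZone
```

Since tonumber strips any leading zeros from hdr.nitro, this should match STORE values stored without padding.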


Ayn
Legend

Generally, try to avoid join whenever possible. Have you explored just using nitro_loc.csv as a regular lookup with the lookup command here?
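In outline — with field names assumed from earlier in the thread (STORE and TimeZone in the CSV) — the lookup-based form would replace the whole subsearch with a single command:

```
index=Nitro_server=xs_json Model="*v10*" earliest=-48h
| dedup "Mac_Address"
| eval nitro_loc=tonumber('hdr.nitro')
| lookup nitro_loc.csv STORE as nitro_loc OUTPUT TimeZone
| search TimeZone="C" OR TimeZone="CDT"
| stats count by FirmwareVersion TimeZone
```

Unlike join, lookup is not subject to subsearch result limits and is typically far cheaper in memory.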
