I have a query that is being blocked from retrieving all relevant data because of a policy that keeps queries under 500 MB. Is there any way I could optimize this query?
index=Nitro_server=xs_json earliest=-48h
| rename hdr.nitro as nitro_loc
| join type=inner
[ inputlookup nitro_loc.csv
| search TimeZone="C" OR "CDT"
| eval nitro_loc=case(len(STORE)==4,STORE,len(STORE)==3,"0".STORE,len(STORE)==2,"00".STORE,len(STORE)==1,"000".STORE) ]
| search Model="*v10*" nitro_loc="*" FirmwareVersion = *
| dedup "Mac_Address"
| stats count by FirmwareVersion TimeZone
Any suggestions would be appreciated!
Which fields are you getting from the lookup nitro_loc.csv? On which field are you doing the join?
I am doing the join on nitro_loc, which is a 4-digit number, and I am trying to get the timezone out of the CSV.
Give this a try:
index=Nitro_server=xs_json Model="*v10*" FirmwareVersion = * earliest=-48h
| fields hdr.nitro "Mac_Address" FirmwareVersion
| dedup "Mac_Address"
| eval nitro_loc=tonumber('hdr.nitro')
| search nitro_loc="*"
| lookup nitro_loc.csv nitro_loc OUTPUT TimeZone
| stats count by FirmwareVersion TimeZone
Changes made:
1) Moved filters to the base search wherever possible.
2) Added a fields command to keep only the fields that will be used in the search.
3) Moved dedup earlier in the search so that subsequent operations run only on the required events.
4) Instead of formatting the lookup table column nitro_loc (which would require a join), converted the search data field to a number and did a regular lookup.
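One caveat on step 4 (a sketch, not tested against your data): the subsearch in your original query padded a column named STORE, so if the key column in nitro_loc.csv is STORE rather than nitro_loc, the lookup needs an explicit field mapping:

```
| eval nitro_loc=tonumber('hdr.nitro')
| lookup nitro_loc.csv STORE as nitro_loc OUTPUT TimeZone
```

The `STORE as nitro_loc` clause tells lookup to match the event field nitro_loc against the CSV column STORE; adjust the names to whatever your CSV actually contains.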
Generally, try to avoid join whenever possible. Have you explored just using nitro_loc.csv as a regular lookup with the lookup command here?
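For example, a minimal end-to-end sketch (assuming the CSV's key column matches the event field after numeric conversion, and restoring the TimeZone filter from the original query, whose values are kept as written):

```
index=Nitro_server=xs_json Model="*v10*" FirmwareVersion=* earliest=-48h
| eval nitro_loc=tonumber('hdr.nitro')
| lookup nitro_loc.csv nitro_loc OUTPUT TimeZone
| search TimeZone="C" OR TimeZone="CDT"
| dedup Mac_Address
| stats count by FirmwareVersion TimeZone
```

Filtering on TimeZone after the lookup replaces the subsearch-side filter in the original query, and the lookup command avoids the memory cost of join entirely.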