Splunk Search

What is wrong with this stats/eval query used to count input fields for an area chart?

krs_1507
New Member

Hi,

I want to keep count of the memory usage of all the jobs that are running, in a range from 0 to 1024G.
That is, a separate count for the 0-32, 32-64, 64-128, and >128 ranges, for every 30 minutes.

I'm trying to do it like this.

index=qvmr_soc_r groupID=socdv_data_mine stat=Run user=$userid$ mem_req<$max_mem_req$ mem_req>$min_mem_req$
| stats mem_0_32(eval(($min_mem_req$>=0) AND ($max_mem_req$ <= 8))) as mem_0_32
| stats mem_32_64(eval(($min_mem_req$>32) AND ($max_mem_req$ <=64))) as mem_32_64
| stats mem_64_128(eval(($min_mem_req$>64) AND ($max_mem_req$ <=128))) as mem_64_128
| stats mem_above_128(eval(($min_mem_req$>128) AND ($max_mem_req$ <=1024))) as mem_above_128
| timechart span=30m count by mem_req

where $userid$, $min_mem_req$, and $max_mem_req$ are input fields.

I'm getting the error below:

"Error in 'stats' command: The argument 'mem_0_32(eval((0>=0) AND (1024<= 8)))' is invalid."

Can you please let me know the correct usage of this?

Thanks & Regards,

Ravi


macadminrohit
Contributor

First of all, mem_0_32 is not a valid stats function; that's why Splunk complained about it. To count events based on ranges of memory values, you can use the approach @niketn has mentioned. You can also try the rangemap command. Run this sample search and modify it according to your requirement:

| makeresults
| eval min_mem_req="20,40,60,80"
| eval max_mem_req="30,50,70,90"
| makemv delim="," min_mem_req
| makemv delim="," max_mem_req
| mvexpand min_mem_req
| mvexpand max_mem_req
| eval range=case(min_mem_req>=0 AND max_mem_req<=32, "mem_0_32", min_mem_req>32 AND max_mem_req<=64, "mem_32_64", min_mem_req>64 AND max_mem_req<=128, "mem_64_128", min_mem_req>128 AND max_mem_req<=1024, "mem_above_128")
| fillnull value="No Range" range
| timechart count by range
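Since rangemap came up: a minimal sketch of that route, assuming a single numeric field (here mem_req, taken from the question) holds the value to classify. rangemap writes the matching label into a field called range:

... | rangemap field=mem_req mem_0_32=0-32 mem_32_64=33-64 mem_64_128=65-128 default=mem_above_128
| timechart span=30m count by range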
0 Karma

martin_mueller
SplunkTrust

To count events by a set of ranges, you can generically do this:

... | bin span=32 mem_req | timechart span=30m count by mem_req
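Applied to the search from the question (a sketch; the index and field names are copied from the original post), that would look like:

index=qvmr_soc_r groupID=socdv_data_mine stat=Run user=$userid$
| bin span=32 mem_req
| timechart span=30m count by mem_req

Note that bin produces equal-width buckets, so values above 128 would land in separate 32-wide buckets rather than a single mem_above_128 bucket.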

niketn
Legend

@krs_1507 your requirement and your query do not add up.

If you just need to chart various ranges of memory usage every 30 minutes, you can try something like the following run-anywhere search based on Splunk's _internal index (the field used is date_second instead of mem_req):

index=_internal sourcetype=splunkd date_second>=0 date_second<60
| timechart span=30m
    count(eval(date_second>=0 AND date_second<20)) as date_second_0_to_20
    count(eval(date_second>=20 AND date_second<40)) as date_second_20_to_40
    count(eval(date_second>=40 AND date_second<60)) as date_second_40_to_60

PS: date_second is used just for the run-anywhere example; as is obvious, its value will always be between 0 and 60.
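Translated to the fields in the question (a sketch; the index, field, and token names are copied from the original post, and the bucket edges follow the stated 0-32/32-64/64-128/>128 requirement):

index=qvmr_soc_r groupID=socdv_data_mine stat=Run user=$userid$ mem_req>$min_mem_req$ mem_req<$max_mem_req$
| timechart span=30m
    count(eval(mem_req>=0 AND mem_req<=32)) as mem_0_32
    count(eval(mem_req>32 AND mem_req<=64)) as mem_32_64
    count(eval(mem_req>64 AND mem_req<=128)) as mem_64_128
    count(eval(mem_req>128 AND mem_req<=1024)) as mem_above_128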

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

krs_1507
New Member

I even tried it this way, but it's not working:

index=qvmr_soc_r groupID=socdv_data_mine stat=Run user=$userid$ mem_req<$max_mem_req$ mem_req>$min_mem_req$
| eval bool_1 = if(($min_mem_req$>=0) AND ($max_mem_req$ <= 32), 1, 0)
| eval bool_2 = if(($min_mem_req$>32) AND ($max_mem_req$ <= 64), 1, 0)
| eval bool_3 = if(($min_mem_req$>64) AND ($max_mem_req$ <= 128), 1, 0)
| eval bool_4 = if(($min_mem_req$>128) AND ($max_mem_req$ <= 1024), 1, 0)
| stats sum(bool_1) as mem_0_32
| stats sum(bool_2) as mem_32_64
| stats sum(bool_3) as mem_64_128
| stats sum(bool_4) as mem_above_128
| timechart span=30m count by mem_req
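(For reference, this boolean-flag idea can work, but the flags would need to test mem_req itself rather than the dashboard tokens, and the sums have to happen inside a single timechart; each chained stats discards the previous results. A minimal sketch under those assumptions:

index=qvmr_soc_r groupID=socdv_data_mine stat=Run user=$userid$ mem_req>$min_mem_req$ mem_req<$max_mem_req$
| eval bool_1 = if(mem_req>=0 AND mem_req<=32, 1, 0)
| eval bool_2 = if(mem_req>32 AND mem_req<=64, 1, 0)
| eval bool_3 = if(mem_req>64 AND mem_req<=128, 1, 0)
| eval bool_4 = if(mem_req>128 AND mem_req<=1024, 1, 0)
| timechart span=30m sum(bool_1) as mem_0_32 sum(bool_2) as mem_32_64 sum(bool_3) as mem_64_128 sum(bool_4) as mem_above_128
)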