Splunk Search

Is there a way to limit memory usage of the stats command?

marcusnilssonmr
Path Finder

I am executing a search like the following:

index=x sourcetype=t | eval {Property} = Value | stats latest by ID

This takes memory proportional to the number of rows, and over All Time that means more than 10 GB of memory. I understand that stats needs to keep an accumulator, but I would like to cap the amount of memory it uses and have it spill to disk instead: a slower search, but with less memory usage.

Can I limit the memory usage of stats this way?

1 Solution

MuS
Legend

Hi marcusnilssonmr,

Have a look at the docs http://docs.splunk.com/Documentation/Splunk/6.2.3/Admin/Limitsconf and read it carefully. It is possible to limit memory usage, but it also has some negative implications like swapping ...

max_mem_usage_mb = <non-negative integer>
* Provides a limitation to the amount of RAM a batch of events or results will use
  in the memory of search processes.
* Operates on an estimation of memory use which is not exact.
* The limitation is applied in an unusual way; if the number of results or events
  exceeds maxresults, AND the estimated memory exceeds this limit, the data is
  spilled to disk.
* This means, as a general rule, lower limits will cause a search to use more disk
  I/O and less RAM, and be somewhat slower, but should cause the same results to
  typically come out of the search in the end.
* This limit is applied currently to a number, but not all search processors.
  However, more will likely be added as it proves necessary.
* The number is thus effectively a ceiling on batch size for many components of
  search for all searches run on this system.
* 0 will specify the size to be unbounded.  In this case searches may be allowed to
  grow to arbitrary sizes.
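
For reference, a minimal sketch of how this setting might look in limits.conf (the 500 MB value is purely illustrative; tune it to your environment and test before rolling it out):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[default]
# Cap the estimated per-batch RAM used by search processors such as stats.
# When the number of results exceeds maxresults AND the estimated memory
# exceeds this limit, intermediate data is spilled to disk.
max_mem_usage_mb = 500
```

Note that this is a global ceiling affecting many search processors, not just stats, and a restart of splunkd is typically needed for the change to take effect.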

Hope that helps ...

cheers, MuS

