Splunk Search

Is there a way to limit memory usage of the stats command?

marcusnilssonmr
Path Finder

I am executing a search like the following:

index=x sourcetype=t | eval {Property} = Value | stats latest(Value) by ID

This uses memory proportional to the number of rows, and over all time that means more than 10 GB. I understand that stats needs to keep an accumulator, but I would like to cap its memory and have it spill to disk instead, accepting a slower search in exchange for lower memory usage.

Can I limit the memory usage of stats this way?

1 Solution

MuS
Legend

Hi marcusnilssonmr,

Have a look at the docs http://docs.splunk.com/Documentation/Splunk/6.2.3/Admin/Limitsconf and read it carefully. It is possible to limit memory usage, but it also has some negative implications like swapping ...

max_mem_usage_mb = <non-negative integer>
* Provides a limitation to the amount of RAM a batch of events or results will use
  in the memory of search processes.
* Operates on an estimation of memory use which is not exact.
* The limitation is applied in an unusual way; if the number of results or events
  exceeds maxresults, AND the estimated memory exceeds this limit, the data is
  spilled to disk.
* This means, as a general rule, lower limits will cause a search to use more disk
  I/O and less RAM, and be somewhat slower, but should cause the same results to
  typically come out of the search in the end.
* This limit is applied currently to a number, but not all search processors.
  However, more will likely be added as it proves necessary.
* The number is thus effectively a ceiling on batch size for many components of
  search for all searches run on this system.
* 0 will specify the size to be unbounded.  In this case searches may be allowed to
  grow to arbitrary sizes.
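For example, you could set a lower ceiling in limits.conf on the search head (the 200 MB value below is purely illustrative; pick a limit that suits your hardware, and note that lower values trade RAM for disk I/O as described above):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[default]
# Estimated per-batch RAM ceiling for search processors such as stats;
# when results also exceed maxresults, data spills to disk instead.
max_mem_usage_mb = 200
```

Restart Splunk (or the search head) after changing limits.conf so the new limit takes effect.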

Hope that helps ...

cheers, MuS

