Splunk Search

Is there a way to limit memory usage of the stats command?

marcusnilssonmr
Path Finder

I am executing a search like the following:

index=x sourcetype=t | eval {Property} = Value | stats latest by ID

This takes memory proportional to the number of rows, and over an All Time search that means more than 10G of memory. I understand that stats needs to keep an accumulator, but I would like it to cap its memory use and start spilling to disk instead, meaning a slower search but less memory usage.

Can I limit the memory usage of stats this way?

1 Solution

MuS
Legend

Hi marcusnilssonmr,

Have a look at the docs http://docs.splunk.com/Documentation/Splunk/6.2.3/Admin/Limitsconf and read them carefully. It is possible to limit memory usage, but doing so also has some negative side effects, like spilling to disk ...

max_mem_usage_mb = <non-negative integer>
* Provides a limitation to the amount of RAM a batch of events or results will use
  in the memory of search processes.
* Operates on an estimation of memory use which is not exact.
* The limitation is applied in an unusual way; if the number of results or events
  exceeds maxresults, AND the estimated memory exceeds this limit, the data is
  spilled to disk.
* This means, as a general rule, lower limits will cause a search to use more disk
  I/O and less RAM, and be somewhat slower, but should cause the same results to
  typically come out of the search in the end.
* This limit is applied currently to a number, but not all search processors.
  However, more will likely be added as it proves necessary.
* The number is thus effectively a ceiling on batch size for many components of
  search for all searches run on this system.
* 0 will specify the size to be unbounded.  In this case searches may be allowed to
  grow to arbitrary sizes.
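As a concrete sketch, an override in limits.conf might look like the following. The 200 MB value is purely illustrative, and the stanza placement is an assumption based on the docs above; verify both against the limits.conf reference for your Splunk version before deploying:

# $SPLUNK_HOME/etc/system/local/limits.conf
#
# Cap the estimated per-batch memory of search processors at roughly
# 200 MB. Per the spec above, data is only spilled to disk when the
# result count exceeds maxresults AND the memory estimate exceeds
# this limit, so a lower value trades RAM for disk I/O and speed.
[default]
max_mem_usage_mb = 200

After editing the file, restart Splunk (or the search head) so the new limit takes effect, and watch search runtimes, since lower limits generally mean slower searches with the same final results.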

Hope that helps ...

cheers, MuS

