I am executing a search like the following:
index=x sourcetype=t | eval {Property} = Value | stats latest by ID
This takes memory proportional to the number of rows, and for an all-time search that means more than 10 GB of memory. I understand that stats needs to keep an accumulator, but I would like to cap its memory usage and have it spill to disk instead, accepting a slower search in exchange for lower memory usage.
Can I limit the memory usage of stats this way?
Hi marcusnilssonmrgreen,
Have a look at the docs http://docs.splunk.com/Documentation/Splunk/6.2.3/Admin/Limitsconf and read it carefully. It is possible to limit memory usage, but it also has some negative implications like swapping ...
max_mem_usage_mb = <non-negative integer>
* Provides a limitation to the amount of RAM a batch of events or results will use
in the memory of search processes.
* Operates on an estimation of memory use which is not exact.
* The limitation is applied in an unusual way; if the number of results or events
exceeds maxresults, AND the estimated memory exceeds this limit, the data is
spilled to disk.
* This means, as a general rule, lower limits will cause a search to use more disk
I/O and less RAM, and be somewhat slower, but should cause the same results to
typically come out of the search in the end.
* This limit is applied currently to a number, but not all search processors.
However, more will likely be added as it proves necessary.
* The number is thus effectively a ceiling on batch size for many components of
search for all searches run on this system.
* 0 will specify the size to be unbounded. In this case searches may be allowed to
grow to arbitrary sizes.
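So to make stats spill to disk sooner, you would lower max_mem_usage_mb in limits.conf on the search head. A minimal sketch (the 200 MB value is only an illustration, not a recommendation; tune it for your system and check the docs for how your Splunk version scopes this setting):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[default]
# Estimated per-batch memory ceiling in MB; once a batch of results
# also exceeds maxresults, it is spilled to disk instead of kept in RAM.
# Example value only - lower means more disk I/O, less RAM, slower search.
max_mem_usage_mb = 200
```

Remember that this limit applies to all searches on the instance, not just your stats search, and a restart (or at least a refresh of the search-time config) is needed for the change to take effect.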
Hope that helps ...
cheers, MuS