Memory growth can have many causes; one possibility is increased memory usage by the idle search process pool (`search-launcher`). You can check this with the following search:
```
index=_introspection component=PerProcess host=<any one SH or IDX host>
| timechart span=5s sum(data.mem_used) as mem_usedMB by data.process_type useother=f usenull=f
```
Example
If memory usage by `search-launcher` is significantly higher than memory usage by `search`, then the idle search process pool (`search-launcher`) is wasting system memory. If you see this trend, you should reduce the size of the idle search process pool.
There are several options in limits.conf for reducing the idle search process pool.
One option is to set `enable_search_process_long_lifespan = false` in limits.conf (a new setting in 9.1 and above):
```
enable_search_process_long_lifespan = <boolean>
* Controls whether the search process can have a long lifespan.
* Configuring a long lifespan on a search process can optimize performance by
  reducing the number of new processes that are launched and old processes
  that are reaped, and is a more efficient use of system resources.
* When set to "true": Splunk software does the following:
  * Suppresses increases in the configuration generation. See the
    'conf_generation_include' setting for more information.
  * Avoids unnecessary replication of search configuration bundles.
  * Allows a certain number of idle search processes to live.
  * Sets the size of the pool of search processes.
  * Checks memory usage before a search process is reused.
* When set to "false": The lifespan of a search process at the 50th
  percentile is approximately 30 seconds.
* NOTE: Do not change this setting unless instructed to do so by Splunk
  Support.
* Default: true
```
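A minimal sketch of the change, assuming the setting is placed under the `[search]` stanza of a local limits.conf (the `$SPLUNK_HOME/etc/system/local/` path is one common location, used here for illustration):

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Disable long-lived search processes so idle processes are reaped sooner,
# shrinking the idle search process pool (search-launcher) memory footprint.
# Per the spec above, consult Splunk Support before changing this setting.
enable_search_process_long_lifespan = false
```

After applying the change, rerun the `_introspection` search above and compare `search-launcher` memory usage before and after.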
Why does the idle search process pool appear to be unused (more idle search processes than the actual number of searches running on the peer)?