On a Linux system, is there a way that I can leverage operating system facilities to limit the aggregated physical memory usage of all Splunk processes?
Basically, I would like to configure the operating system to constrain the total physical memory usage of Splunk to a ceiling that ensures that it will not overrun the available system resources, even if that means killing processes.
It's possible to use Linux "control groups" (cgroups) to apply a ceiling to the aggregate memory use of any group of processes.
Control groups were originally introduced to meet the needs of "containers" and in-operating-system virtualization projects such as Virtuozzo, OpenVZ, and KVM, but they have since found use for many other purposes.
Here's an article that describes steps you can use on current releases of Linux (e.g. RHEL/CentOS 7 or Debian 8) to limit all memory used by a particular user ID (e.g. the splunk user): http://wiki.splunk.com/Community:Limiting_Splunk_Memory_Linux_ControlGroups#Limiting_Splunk_Memory_w...
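For illustration, here's a minimal sketch of the general technique, assuming cgroup v1 with the memory controller mounted at /sys/fs/cgroup/memory (the default on RHEL/CentOS 7). The cgroup name "splunk" and the 4G ceiling are example values, not recommendations; the article above covers the persistent, distribution-specific setup.

    # Create a cgroup for Splunk and cap its total physical memory at 4 GiB
    mkdir /sys/fs/cgroup/memory/splunk
    echo 4G > /sys/fs/cgroup/memory/splunk/memory.limit_in_bytes

    # Move the main splunkd process into the group; children started afterwards
    # inherit the cgroup, but already-running children must be moved individually
    echo $(pgrep -o splunkd) > /sys/fs/cgroup/memory/splunk/cgroup.procs

If the processes in the group push past the ceiling, the kernel first tries to reclaim their pages and, failing that, invokes the OOM killer against processes inside the group only, which matches the "kill rather than overrun the box" behavior you're after.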