AFAIK, that setting is instance/cluster-specific and cannot be set up for specific roles. Why not apply the limit to all users? (I'm guessing high usage is slowing/crashing your Splunk servers, so applying the limit to all users would probably be more helpful.)
Per my research, this is how you set up the limits for the entire Splunk environment:
enable_memory_tracker = &lt;boolean&gt;
* Specifies if the memory tracker is enabled.
* When set to "false" (disabled): The search is not terminated even if
  the search exceeds the memory limit.
* When set to "true": Enables the memory tracker.
* Must be set to "true" to enable the "search_process_memory_usage_threshold"
  setting or the "search_process_memory_usage_percentage_threshold" setting.
* Default: false
search_process_memory_usage_threshold = &lt;double&gt;
* To use this setting, the "enable_memory_tracker" setting must be set
  to "true".
* Specifies the maximum memory, in MB, that the search process can consume
  in RAM.
* Search processes that violate the threshold are terminated.
* If the value is set to 0, then search processes are allowed to grow
  unbounded in terms of memory usage.
* Default: 4000 (4GB)
search_process_memory_usage_percentage_threshold = &lt;float&gt;
* To use this setting, the "enable_memory_tracker" setting must be set
  to "true".
* Specifies the percent of the total memory that the search process is
  entitled to consume.
* Search processes that violate the threshold percentage are terminated.
* If the value is set to zero, then splunk search processes are allowed to
  grow unbounded in terms of percentage memory usage.
* Any setting larger than 100 or less than 0 is discarded and the default
  value is used.
* Default: 25%
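As a worked example, the settings above could be combined in limits.conf like this (the 3000 MB and 20% values are illustrative only, not recommendations; tune them to your hardware):

```ini
[search]
# Required before either threshold below takes effect.
enable_memory_tracker = true

# Terminate any search process that exceeds 3000 MB of RAM (illustrative value).
search_process_memory_usage_threshold = 3000

# Also terminate a search process that consumes more than 20% of
# total physical memory (illustrative value).
search_process_memory_usage_percentage_threshold = 20
```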
These settings live in limits.conf, under the [search] stanza.
For per-role quotas, set them in authorize.conf. Use this example:
[role_ninja]
rtsearch = disabled
importRoles = user
srchFilter = something=something
srchIndexesAllowed = *
srchIndexesDefault = mail;main
srchJobsQuota = 8
rtSrchJobsQuota = 8
srchDiskQuota = 50
The role settings are described in more detail in the authorize.conf spec.
Hope it helps!
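If you want to confirm which values are actually in effect after editing, Splunk's btool utility can print the merged configuration (the path assumes a default $SPLUNK_HOME install; the grep filter is just a convenience):

```shell
# Print the effective [search] stanza from all merged limits.conf files,
# showing which file each value comes from, then filter to the memory settings.
$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep -i memory
```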