Even though Splunk allows us to set a role-level concurrent search jobs limit, it does not let us guarantee a minimum number of concurrent search jobs for a role.
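For context, the role-level cap mentioned above lives in authorize.conf; a minimal sketch, with hypothetical role names, might look like this (note these are ceilings, not reservations):

```ini
# authorize.conf -- per-role concurrent search caps (role names are hypothetical)
[role_critical_dashboards]
srchJobsQuota = 20      # max concurrent historical searches for members of this role
rtSrchJobsQuota = 4     # max concurrent real-time searches

[role_general_users]
srchJobsQuota = 6
rtSrchJobsQuota = 2
```

Because these quotas are only upper bounds, nothing stops other roles from exhausting the instance-wide search slots before the critical role gets a chance to run.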
We need a way to safeguard vital, business-critical dashboards/searches, which becomes harder and harder without such functionality (similar to Hadoop YARN's queue capacity vs. max capacity).
As Splunk usage in the organization keeps growing, we occasionally see rogue, poor-quality dashboards absorbing a large share of our search capacity, and the occasional scheduled PDF reaches our senior management with a graph pinned at the maximum.
Lowering every role's concurrent search jobs limit does not give us the control we need; it would effectively promote resource waste, since a role's searches would sometimes be queued even when other roles' searches have minimal activity.
In cases of extreme concurrency, ad hoc searches from a role consuming far more than its minimum ensured 'capacity' (relative to its search jobs limit) should be killed so that the minimum ensured capacity can be met across all roles.
I will raise this as an Enterprise enhancement case, but would still like to hear some thoughts.
I suggest you stand up a separate search head used only for vital, business-critical dashboards/searches.