Splunk limits the number of concurrent search jobs based on how many CPUs your Splunk host has. You'll see this message if you have too many inline searches running on your dashboard, or too many backgrounded searches running on your system. You can always go to the search jobs page (link in the upper right-hand corner of Splunk Web) to see what's running and cancel rogue jobs on your system.
If you're building dashboards and seeing this error, you may want to reconsider how you've built your dashboard. Convert some of the searches to scheduled saved searches. Schedule them to run as often as you like to keep the data in your dashboard fresh, but not so often that you overload your system with searches. If you need absolutely fresh data in your dashboard, use inline searches, but use them sparingly and only for fast-running searches. For long-running, expensive searches, use scheduled saved searches.
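As a sketch, a scheduled saved search backing a dashboard panel might look like this in savedsearches.conf (the stanza name, search string, and schedule are made up for illustration; cron_schedule, enableSched, and the dispatch.* time attributes are standard savedsearches.conf settings):

```ini
# savedsearches.conf -- hypothetical saved search feeding a dashboard panel
[dashboard_errors_by_host]
search = index=main sourcetype=syslog error | stats count by host
enableSched = 1
# Run every 15 minutes: often enough to keep the panel reasonably fresh,
# rarely enough to avoid piling up concurrent search jobs.
cron_schedule = */15 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
```

The dashboard panel then references the saved search by name instead of running an inline search, so loading the page costs nothing beyond reading the cached results.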
To fix the problem more generally, you can tweak some configuration knobs in limits.conf as follows:
max_searches_per_cpu: While increasing this could fix the dashboard issue when the searches are fairly cheap to run, it can lead to performance degradation if you've scheduled a large number of expensive searches.
dispatch_quota_retry: The number of retries the back end will attempt before throwing the quota/limit error. The back end does an exponential back-off, starting at 100ms and doubling the wait on every retry.
dispatch_quota_sleep_ms: The initial sleep time for retries. Instead of increasing max_searches_per_cpu, you can set dispatch_quota_retry to 10, which instructs the back end to retry dispatching a particular search for about 100 seconds (100ms × (2^10 − 1) ≈ 102s) before throwing the quota/limit error.
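Putting those knobs together, a minimal limits.conf tweak might look like the following (the values are illustrative, not recommendations -- tune them to your hardware and search load):

```ini
# limits.conf -- the [search] stanza controls concurrency and dispatch retries
[search]
# Concurrent historical searches scale with CPU count:
#   max_searches_per_cpu * number_of_cpus + base_max_searches
max_searches_per_cpu = 4
base_max_searches = 4
# Retry dispatching instead of failing immediately: 10 retries with
# exponential back-off starting at 100ms waits roughly 100 seconds in
# total (100ms * (2^10 - 1) ~= 102s) before throwing the quota error.
dispatch_quota_retry = 10
dispatch_quota_sleep_ms = 100
```

Remember the warning below about memory: raising the concurrency caps trades the quota error for higher resource usage.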
Um, ok, good grief.
This error is hit when you install Splunk 4.0.9 on a Windows system.
Seriously, how can you guys release an app that's enabled by default and featured prominently -- the Windows app -- and then manage to miss the warning that appears when you open the very first page of that app?!
I got a t-shirt that says "Splunk for the Win(dows)!" -- I think it must have been the extent of the attention you gave to the actual app.
I understand your frustration here -- we really should have caught this earlier, and perhaps we were too conservative with our limits.
There is no reason your dashboard shouldn't load without errors. The short-term solution will be to increase the defaults for max_searches_per_cpu and base_max_searches to 4. Since this involves a significant change in the way Splunk uses CPU resources, it will happen in the next minor release, 4.1, coming soon.
** Note that if you increase your max number of concurrent searches too much, you'll run out of memory on the server. **
Unfortunately for dashboards, Splunk attempts to queue all of the charts' searches at the same time; if it can't, it leaves some of the charts (seemingly at random) blank. So when you design a dashboard, keep in mind the maximum number of concurrent searches you can get away with, and never put more charts on your dashboard than that value.
If several people hit the same dashboard at the same time, Splunk will queue their searches, but will still paint each user's dashboard with only as many charts as your maximum number of concurrent searches allows.
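As I understand the limits.conf formula, the concurrency cap discussed above can be computed as follows (a sketch; the CPU count and setting values below are made up for illustration):

```python
def max_concurrent_searches(num_cpus: int,
                            max_searches_per_cpu: int,
                            base_max_searches: int) -> int:
    """Cap on concurrent historical searches, per limits.conf:
    max_searches_per_cpu * number_of_cpus + base_max_searches."""
    return max_searches_per_cpu * num_cpus + base_max_searches

# Hypothetical 2-CPU host with both settings at the 4.1 default of 4:
cap = max_concurrent_searches(num_cpus=2,
                              max_searches_per_cpu=4,
                              base_max_searches=4)
print(cap)  # -> 12: a dashboard with more than 12 charts may leave some blank
```

On such a host, the twelfth chart would render but a thirteenth inline search would either be queued, be retried, or hit the quota error, depending on the dispatch_quota_* settings.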
We need a better solution here -- for example, a way to limit the number of inline searches while still keeping the displayed charts and data near up to date.