I've configured a dev Splunk 6.4 environment and noticed that my Distributed Management Console is getting "max concurrent searches reached" messages. So, since the DMC isn't part of the Search Head Cluster and can't benefit from the improved scheduling, and since the DMC will probably add more and more searches over time as it develops, I'm looking for ideas on how to prevent this situation, given that the DMC is supposed to be runnable from a "smallish" VM (6 vCPUs).
The DMC only fires searches when you open a page or a dashboard, and we try to limit the total number of searches on each page/dashboard.
Reaching the max concurrent searches won't prevent the DMC from working; the searches just sit in the waiting queue and run as soon as other searches finish.
True, but if it's a troubleshooting tool, then I need the most recent information, and I might not have it. This seems to be an area that was overlooked, in all honesty... more and more searches are going to be added to the DMC, which is going to require it to be a bigger server.
I agree with @ykou. I also run my DMC on a tiny VM and ALWAYS run out of concurrent searches when I load the DMC's default dashboard. I trust that they'll just queue up, and I don't mind waiting a moment, so I ignore it.
It's merely informational, letting you know that the other searches are going to queue because you've hit the max. I only really see it on that first dashboard, since so much happens there.
If it's truly an issue, you could give the DMC more CPUs or raise the per-CPU limits.conf settings (sketch below), but I'm not sure it's worth it.
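For reference, here's a minimal sketch of the limits.conf route, assuming the Splunk 6.x defaults of max_searches_per_cpu = 1 and base_max_searches = 6; the numbers below are illustrative, not a recommendation. The cap on concurrent historical searches is max_searches_per_cpu × number_of_CPUs + base_max_searches, so a 6-vCPU DMC defaults to 1 × 6 + 6 = 12.

    # $SPLUNK_HOME/etc/system/local/limits.conf on the DMC instance
    [search]
    # Default is 1; each extra unit allows one more concurrent
    # historical search per CPU core.
    max_searches_per_cpu = 2
    # Default is 6; a flat allowance added on top of the per-CPU count.
    base_max_searches = 10

With those values, a 6-vCPU box would allow 2 × 6 + 10 = 22 concurrent historical searches after a splunkd restart. Keep an eye on CPU afterwards, since you're letting more searches run at once on the same small hardware.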