Recently started encountering an issue where one node of a 4-node search head cluster starts reporting:
SHPMaster - Search not executed: Your maximum number of concurrent searches has been reached. usage=93 quota=40 user=my.username
The strange thing is that this only happens on one node. The Activity -> Jobs drop-down doesn't reveal anywhere near that number of running jobs, and bouncing splunkd on the reporting member resolves the issue, though it gradually reappears over the next 5-7 days.
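One way I've been cross-checking the reported usage is counting in-flight jobs directly from the node's REST endpoint. A rough sketch of the search (field names are as returned by the jobs endpoint; adjust filters for your version):

    | rest splunk_server=local /services/search/jobs
    | search dispatchState="RUNNING"
    | stats count by eai:acl.owner

The counts this returns are nowhere near the usage= figure in the message either.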
This environment was recently upgraded to 6.3.2.
I guess the question, then, is: why is this happening on only a single node?
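For reference, a sketch of the search I've been using to watch for recurrences (it assumes the SHPMaster messages land in _internal, as splunkd log messages normally do):

    index=_internal sourcetype=splunkd SHPMaster "maximum number of concurrent searches"
    | timechart span=1d count by host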
The node reporting this issue must be the captain node at the time, since the channel reporting the message is SHPMaster. This message is related to a known search head clustering bug introduced in 6.3. Please refer to this post for details:
https://answers.splunk.com/answers/337598/search-head-cluster-pre-63-we-could-run-more-numbe-2.html
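To confirm which member holds the captain role when the messages appear, you can check from the CLI of any cluster member (the credentials below are placeholders):

    $SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:yourpassword

The output lists the current captain along with the status of each member, so you can verify that the node logging SHPMaster messages is indeed the one holding the captaincy.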