4-node SHC running 6.4.1 on 64-bit Linux. Very lightly used (we're still working on migrating from the old system).
One user has 3 searches that fire at midnight and 3 more that fire hourly. The searches run quickly, under 1 second in every case I've tested, and we've set a schedule window on all of them: 5 minutes for the hourly searches and 3 hours for the nightly ones.
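For context, the scheduling looks roughly like the sketch below in the app's savedsearches.conf. The stanza names and cron expressions are illustrative (I'm only reusing the search name that shows up in the error message further down); the schedule_window values are what we actually set:

```
# savedsearches.conf in the docker_search app (stanza names and cron
# expressions are illustrative; only the schedule_window values reflect
# our real config).

[Agency Notice Worker Notification Error]
enableSched = 1
cron_schedule = 0 * * * *    # hourly, at the top of the hour
schedule_window = 5          # window in minutes

[Example Nightly Search]
enableSched = 1
cron_schedule = 0 0 * * *    # midnight
schedule_window = 180        # 3-hour window, in minutes
```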
Throughout the day I see messages from SHCMaster:
07-01-2016 10:00:26.234 -0400 ERROR SHCMaster - Search not executed: Your maximum number of concurrent searches has been reached. usage=12 quota=12 user=jsmith. for search: nobody;docker_search;Agency Notice Worker Notification Error
They were logging hundreds of times per hour and reporting usage=53, which was insane; we did not have anywhere near 50 searches running. I performed a rolling restart of the cluster and it settled down for a few hours, but then the messages came back, not as frequent but still there. Most concerning of all, this is causing missed runs of the scheduled searches.
The DMC Search Activity and Scheduler Activity pages show between 0 and 3 active searches at the same times SHCMaster is reporting this problem.
Apparently this bug, introduced in 6.3, is still active:
https://answers.splunk.com/answers/329518/why-do-scheduled-searches-randomly-stop-running-in.html
I applied the workaround listed there (in limits.conf), and the monotonically increasing usage stat for the user has returned to 0.
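In case it helps the next person: the workaround itself is the setting described in the linked answer (we pushed the limits.conf change from the deployer and did another rolling restart). For orientation, these are the standard limits.conf concurrency knobs that the scheduler's math is built on; the values shown are the stock defaults as I understand them, so verify against the limits.conf spec for your version before changing anything:

```
# limits.conf, in an app pushed from the deployer (or system/local for testing)

[search]
# The overall concurrent-search ceiling is roughly:
#   (max_searches_per_cpu * number_of_CPUs) + base_max_searches
max_searches_per_cpu = 1
base_max_searches = 6

[scheduler]
# Percentage of that ceiling that scheduled searches are allowed to use.
max_searches_perc = 50
```

Note that the quota=12 in the error is the per-user quota derived from the user's roles (srchJobsQuota in authorize.conf), which is what the captain was mis-counting against; the settings above govern the system-wide ceiling.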