I'm reading the docs for search head clustering, and trying to determine exactly how a job is assigned. The doc says that each job is assigned to the member currently with the least search load. But it also says that the captain has no insight into CPU load. So, what is the captain looking at?
The captain assumes that each member has the same resources (cpu, etc.). In assigning scheduled search jobs, it attempts to distribute jobs evenly among the members. So, it assigns each job to the member currently running the least number of jobs.
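The assignment policy described above can be sketched in a few lines. This is a hedged illustration, not Splunk's actual code; the member names and job counts are made up:

```python
# Sketch of the captain's assignment policy as described above:
# send each new scheduled job to the member currently running the
# fewest jobs. (Hypothetical member names and counts.)

def pick_member(running_jobs):
    """Return the member with the fewest currently running jobs."""
    return min(running_jobs, key=running_jobs.get)

running = {"sh1": 4, "sh2": 2, "sh3": 7}
target = pick_member(running)
running[target] += 1  # the captain now counts the new job against that member
print(target)  # sh2
```

Note that the tie-breaking and bookkeeping details are internal to the captain; the point is only that the metric is job count, not CPU.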
Thanks. I'd really like to see Splunk make the resource monitoring more intelligent. Most customers purchased large servers with lots of CPUs to handle the load (and to avoid SHP), and now we get this - which looks nice - but it seems that we are still limited to purchasing large servers. I'd rather add members and scale horizontally, without needing to purchase extra-large servers. Very frustrating.
If I understand your concern correctly, I think that:
- The captain is the only member of the cluster that is connected to the deployer; it is a central distribution point
- The captain schedules and manages searches
"The -target parameter specifies the URI and management port for any member of the cluster, for example, https://10.0.1.14:8089. You specify only one cluster member but the deployer pushes to all members. This parameter is required."
The deployer pushes to all members directly, so the captain isn't even doing that.
I have set up the required 3 SH members but cannot see a distribution of searches - each member appears to be running all the searches. Maybe I'm just not looking in the right place - is there a way to monitor this more closely?
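Since the captain balances by job count, one rough way to check the distribution is to count scheduled-search runs per member, for example from the scheduler logs in `index=_internal` (`sourcetype=scheduler`). Below is a minimal offline sketch of that counting; the log lines and field layout are made up for illustration, so real scheduler events may look different:

```python
# Hedged sketch: tally scheduled-search runs per search head from
# scheduler-style log events. The sample lines below are hypothetical.
import re
from collections import Counter

sample_events = [
    "status=success, host=sh1, savedsearch_name=errors_hourly",
    "status=success, host=sh2, savedsearch_name=errors_hourly",
    "status=success, host=sh1, savedsearch_name=disk_usage",
]

counts = Counter()
for event in sample_events:
    m = re.search(r"host=(\w+)", event)
    if m:
        counts[m.group(1)] += 1

print(dict(counts))  # {'sh1': 2, 'sh2': 1}
```

If each member shows roughly the same count over time, the captain's balancing is working; a heavily skewed tally would suggest the jobs aren't being distributed.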