Hello Splunkers,
We have a search head cluster with 3 search heads, and only one of those 3 shows noticeably higher CPU utilization compared with the other two.
For example: search head 1 is taking 70% load while the remaining 2 are taking 45% and 40%.
When I checked the latency, it is 30 sec on the search head taking 70% load, while the remaining 2 are at 3 sec and 2 sec.
Can anyone please help me reduce the latency of this search head in the SH cluster?
@kamlesh_vaghela , @niketnilay, @somesoni2 , @mayurr98
In a search head cluster, one of your search heads takes on the additional role of captain. The captain is responsible for keeping the cluster in sync, scheduling jobs, and coordinating replication, while also acting as a "normal" search head. The captain will always utilize more resources than the other nodes in the cluster.
You can determine which node is the captain through the web UI or by using the following command from the CLI on any of the SHC members:
splunk show shcluster-status
I suspect the output will show that the captain is the node that is busiest.
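If you prefer to check from the search bar instead of the CLI, a REST search along these lines should also report the captain. This is only a sketch: the /services/shcluster/status endpoint exists on SHC members, but the exact captain.* field names may differ slightly between Splunk versions, so adjust as needed.

| rest splunk_server=local /services/shcluster/status
| fields captain.label captain.id captain.dynamic_captain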
When you say "70% load", is it user load or system load ? Can you please provide more info on "30 sec latency", what type of latency is this For example: Is this schedule search execution latency ?
When you say "Taking 70%" of the load, do you mean the load on the system is 70% or 70% of the searches are being executed by SH1?
Is the high-load related to the Cluster or KVStore captain?
Do you see iowait on SH1 or is it only load?
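To help answer these, a couple of searches against the internal indexes can quantify both points. These are sketches that assume the standard _internal and _audit logging on each SHC member; field names such as dispatch_time and scheduled_time are the usual scheduler.log fields but may vary slightly by version.

Scheduled search execution latency per member:

index=_internal sourcetype=scheduler status=success
| eval latency = dispatch_time - scheduled_time
| stats avg(latency) AS avg_latency_sec max(latency) AS max_latency_sec count BY host

Share of searches executed by each member:

index=_audit action=search info=granted search_id=*
| stats count BY host

If the busiest host in both results is also the captain from "splunk show shcluster-status", the skew is most likely just the captain doing its extra work rather than a misconfiguration.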