So I have two indexers in a cluster with a cluster master (CM),
and two SHs in a cluster with a deployer.
The SH cluster is connected to the CM.
I keep seeing high-CPU alerts on the indexers.
Sometimes IDX01 hits 100% CPU, sometimes IDX02, but never both at the same time.
I went to the DMC and saw that searches are causing the load.
So I opened both indexers via PuTTY (command line) and ran the top command to watch CPU utilization.
Whenever I saw an indexer hitting 100%,
I opened the dispatch folder at /opt/splunk/var/run/splunk/dispatch
and ran find . -name "alive.token"
What I found is that whenever the acceleration searches are running, the CPU hits 100% on that particular indexer.
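For reference, the top-plus-find loop above can be wrapped in a small helper. This is only a sketch: the dispatch path is the default /opt/splunk one from the post, "alive" is taken to mean the job directory still contains an alive.token file, and the function name is made up here:

```shell
#!/bin/sh
# List the search_ids of currently running jobs: each running search owns
# one dispatch subdirectory (named after its search_id) holding alive.token.

list_alive_jobs() {
    # $1: dispatch directory (defaults to the path from the post)
    dispatch="${1:-/opt/splunk/var/run/splunk/dispatch}"
    find "$dispatch" -maxdepth 2 -name "alive.token" 2>/dev/null |
    while read -r token; do
        # The parent directory name of alive.token is the search_id.
        basename "$(dirname "$token")"
    done
}

list_alive_jobs "$@"
```

Running this on each indexer while top shows 100% CPU lets you correlate the spike with specific scheduler/acceleration search_ids.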
My questions are:
1. Why are my acceleration searches running on only one of the two indexers, and not both together?
When I see them running on IDX01, the IDX02 dispatch directory has no alive searches and its CPU usage is very low.
When I see them running on IDX02, the IDX01 dispatch directory has no alive searches and its CPU usage is very low.
2. I am trying to draft a search to count the number of jobs run on each indexer.
I took the search_id from the dispatch folder and searched for it in Splunk.
I got events from _audit and _internal; the problem is I don't see any field saying which indexer the search job ran on.
[As the _audit and _internal indexes are replicated across the cluster, I can't differentiate the indexers' internal logs.]
Please give your thoughts.
P.S.: it is a multisite cluster.
SH01 and IDX01 are on site1;
SH02 and IDX02 are on site2.
I thought search affinity was the problem.
But per search affinity, a search triggered by SH01 should mostly run on IDX01.
Here, though, I see search jobs triggered by both SH01 and SH02 [I know this from the search job naming convention in the dispatch folder], yet they run on only one of the indexers.
A 2-node search head cluster cannot survive any node failure; in fact, three is the minimum size for a search head cluster.
The _audit and _internal logs should both have a host field that tells you which instance the logs came from.
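A sketch of the per-indexer job count the question asks for, leaning on that host field. The sourcetype below (splunkd_remote_searches, which indexers write to remote_searches.log) is a reasonable starting point, and the host=IDX* filter is a guess at your hostname convention:

```spl
index=_internal sourcetype=splunkd_remote_searches host=IDX*
| stats count AS remote_search_jobs BY host
```

Because host is stamped where the event originates, it distinguishes the two indexers even if the index contents end up replicated across the cluster.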
If you're running a search head cluster, one of the two nodes will run the acceleration job. If SH01 runs the job it will run against IDX01 (site1), and SH02 against IDX02 (site2), provided the bucket is available on both sites.
If for any reason the bucket is only available on IDX02 while the search is running on SH01, it will query IDX02, as the affinity is only a preference.
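Search affinity itself comes from the site assignment in each search head's server.conf. A minimal fragment, with the site names assumed from the post; site0 is the documented way to switch affinity off entirely:

```ini
# On SH01 (server.conf): searches prefer indexers in site1, i.e. IDX01
[general]
site = site1

# Setting site = site0 instead disables search affinity,
# so searches fan out across indexers in all sites.
```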
A cluster must consist of a minimum of three members. A two-member cluster cannot tolerate any node failure. Failure of either member will prevent the cluster from electing a captain and continuing to function. Captain election requires majority (51%) assent of all members, which, in the case of a two-member cluster, means that both nodes must be running. You therefore forfeit the high availability benefits of a search head cluster if you limit it to two members.
Alerts for Splunk Admins https://splunkbase.splunk.com/app/3796/
Version Control for Splunk https://splunkbase.splunk.com/app/4355/