Splunk Search

Why are searches running on only one indexer?


Hi All,

I have two indexers in a cluster with a cluster master (CM), and two search heads (SHs) in a cluster with a deployer. The SH cluster is connected to the CM.

I'm seeing high-CPU alerts on the indexers. Sometimes IDX01 hits 100% CPU, sometimes IDX02, but never both at the same time.

So I went to the DMC and saw that searches are causing the issue.

I then opened both indexers via PuTTY and ran the top command to watch CPU utilization. Whenever I saw an indexer hitting 100%, I went into its dispatch folder and ran:

find . -name "alive.token"

What I found is that whenever the acceleration searches are running, CPU hits 100% on that particular indexer.
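The manual check above can be scripted. A minimal sketch, assuming the default dispatch location under $SPLUNK_HOME (pass a different path as an argument if yours differs):

```shell
# Count live search jobs on this indexer by counting alive.token files;
# each running job's dispatch directory holds one while the job is alive.
# Assumption: default dispatch path under $SPLUNK_HOME.
count_alive_jobs() {
    dispatch="${1:-${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch}"
    find "$dispatch" -name "alive.token" 2>/dev/null | wc -l | tr -d ' '
}

count_alive_jobs   # prints the number of live jobs in the dispatch directory
```

Running this in a loop on each indexer alongside top makes it easy to correlate CPU spikes with the number of live jobs.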

My questions are:

1. Why are my acceleration searches running on only one of the two indexers, and not both collectively?

If they are running on IDX01, the IDX02 dispatch directory does not have any alive searches and its CPU usage is very low.

If they are running on IDX02, the same is true of IDX01.

2. I am trying to draft a search to count the number of jobs run on each indexer.

I took the search_id from the dispatch folder and searched for it in Splunk.
I got events from _audit and _internal, but I don't see any field saying which indexer the search job ran on.
[Since the _audit and _internal indexes are replicated across the cluster, I can't differentiate the indexers' internal logs.]

Please give your thoughts.

P.S.: It is a multisite cluster.
SH01 and IDX01 are on site1.
SH02 and IDX02 are on site2.

I thought search affinity was the problem. But with search affinity, a search triggered by SH01 will mostly run on IDX01. Here, however, I see search jobs triggered by both SH01 and SH02 (I know this from the search jobs' naming convention in the dispatch folder), yet they run on only one of the indexers at a time.



Hi ramarcsight,

Very old thread, but did you manage to resolve this issue? It would be good if you could respond with any helpful information.



A 2-node search head cluster cannot survive any node failure; in fact, three is the minimum cluster size for a search head cluster.

The _audit and _internal logs should both have a host field that indicates which instance the logs came from.
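Building on the host field, here is a sketch of a per-indexer job count. The field names (action, info, host) are the standard _audit search-audit fields; as an assumption worth verifying in your environment, jobs dispatched remotely to an indexer peer should appear in that peer's _audit with search_id values prefixed "remote_", which you can use to filter out locally run searches:

```
index=_audit action=search info=completed
| stats count by host
```

Splitting the count by host shows how many completed jobs each instance audited, which should reveal whether one indexer is servicing all of the acceleration searches.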

If you're running a search head cluster, then one of the two nodes will run the acceleration job. If SH01 runs the job, it will run against IDX01 (site1), and SH02's jobs will run against IDX02 (site2), provided the bucket is available in both sites.

If for any reason the bucket is only available on IDX02 and the search is running on SH01, it will query IDX02, as affinity is only a preference.

It is generally recommended to use site0 for a search head cluster; perhaps have a read of "Deploy a search head cluster in a multisite environment"
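For reference, disabling search affinity means assigning the search heads to site0. A sketch of the documented setting, applied in server.conf on each search head cluster member (restart required for it to take effect):

```
# server.conf on each search head cluster member
[general]
site = site0
```

With site0 set, the search head distributes searches across peers in all sites rather than preferring its local site, which should spread the acceleration load over both indexers.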

and "Search head clustering architecture", in particular the section "Captain election process has deployment implications":

 A cluster must consist of a minimum of three members. A two-member cluster cannot tolerate any node failure. Failure of either member will prevent the cluster from electing a captain and continuing to function. Captain election requires majority (51%) assent of all members, which, in the case of a two-member cluster, means that both nodes must be running. You therefore forfeit the high availability benefits of a search head cluster if you limit it to two members.