Why am I only seeing results from one search-peer?

I'm trying to confirm that replication and searching can happen on one NIC while ingesting happens over a different NIC.
I have the following simple test setup:

3 indexers in a cluster, each with 2 NICs
1 master
1 search-head
1 forwarder sending to all three indexers
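
To spell out the NIC split: the forwarder points at hostnames that resolve to the ingest NIC, while replication and searching are meant to use the -private hostnames. A rough sketch of the forwarder's outputs.conf (the -ingest hostnames here are placeholders for whatever resolves to that NIC):

    [tcpout]
    defaultGroup = test_indexers

    [tcpout:test_indexers]
    # placeholder hostnames that resolve to each indexer's ingest NIC
    server = splunk-index-test-01-ingest.oit.duke.edu:9997, splunk-index-test-02-ingest.oit.duke.edu:9997, splunk-index-test-03-ingest.oit.duke.edu:9997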

The search-head is connected to the master, and in Settings > Distributed search > Search peers (or on the command line) I see all three indexers in the cluster:

    splunk list search-server
    Server at URI "dsplunk-index-test-01.oit.duke.edu:8089" with status as "Up"
    Server at URI "splunk-index-test-01-private.oit.duke.edu:8089" with status as "Up"
    Server at URI "splunk-index-test-02-private.oit.duke.edu:8089" with status as "Up"
    Server at URI "splunk-index-test-03-private.oit.duke.edu:8089" with status as "Up"
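
(If it helps, the merged distributed-search config on the search-head can be dumped with btool, which also shows which file each search-peer entry comes from:)

    splunk btool distsearch list distributedSearch --debug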

But I only see results from one indexer when I search from the web GUI on the search-head, or from its command line.

This is my command-line search: splunk search "index=* | chart count by splunk_server"
I'm using the same search in the web GUI, just the part inside the quotes.
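
(The same per-peer count can also be pulled with tstats, which skips raw-event retrieval but should behave the same way for this test:)

    splunk search "| tstats count where index=* by splunk_server"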

If I run the command-line search on each indexer individually, I get results from that specific search-peer.

If I run the command-line search on the master, I get results from all three search-peers.

    splunk_server         count
    --------------------  -----
    splunk-index-test-01     57
    splunk-index-test-02     39
    splunk-index-test-03    456

If I run the command-line search from the search-head I get one result.

    splunk_server         count
    --------------------  -----
    splunk-index-test-01     57

If I had configured the search-head's connection to the master incorrectly, I wouldn't see the search peers in the list search-server output, or I wouldn't see any results at all. As it is, it makes no sense that one of the 3 indexers shows up and the other two don't. Firewalls are open from the search-head to both NICs on all 3 indexers, and I can telnet to port 8089 from the search-head to both NICs on all 3 boxes.
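
Beyond telnet, splunkd itself can be poked from the search-head; a quick REST call like this (it prompts for credentials) confirms the management endpoint answers on a private NIC:

    curl -k -u admin https://splunk-index-test-01-private.oit.duke.edu:8089/services/server/info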

Here's the snippet from server.conf on the search-head:

    [clustering]
    master_uri = https://splunk-master-test-01.oit.duke.edu:8089
    mode = searchhead
    pass4SymmKey = $1$7/FK0zLe7w3j3t4lkTuxrXaNBB9vpccQ==

And from the master:

    [clustering]            
    cluster_label = oit                                        
    mode = master       
    pass4SymmKey = $1$bYZ2q5Vu//5VNuiwljjQlH9xYhGBKA==
    replication_factor = 2         
    search_factor = 1   

(pass4SymmKeys have been changed)
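
The indexers' server.conf isn't shown above; for reference, the settings that pin search and replication traffic to a particular address are register_search_address and register_replication_address in each peer's [clustering] stanza. A rough sketch of what that can look like on the first indexer (the key and replication port here are placeholders):

    [replication_port://9887]

    [clustering]
    master_uri = https://splunk-master-test-01.oit.duke.edu:8089
    mode = slave
    pass4SymmKey = <same key as on the master>
    # advertise the private NIC to the master for search and replication
    register_search_address = splunk-index-test-01-private.oit.duke.edu
    register_replication_address = splunk-index-test-01-private.oit.duke.edu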

splunk show cluster-status shows that everything is up and searchable: all green lights.
How do I get my search-head to believe that it actually should be able to see the other search-peers?
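
(For completeness, the peers the search-head actually dispatches to can also be listed with a rest search; it's the same information as list search-server, just pulled via SPL:)

    | rest /services/search/distributed/peers splunk_server=local | table title status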
