Deployment Architecture

Why am I only seeing results from one search-peer?

duke_splunk_adm
Engager

I'm trying to confirm that replication and searching can happen on one NIC while ingesting happens over a different NIC.
I have the following simple test setup:

3 indexers in a cluster, each with 2 NICs
1 master
1 search-head
1 forwarder sending to all three indexers
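
To be clear about the two-NIC goal: my understanding is that the register_* settings under [clustering] on each peer control which address the peer advertises for replication and search versus forwarder traffic. Roughly this is what I have in mind on each indexer (a sketch, not my exact config; key redacted, hostnames shown for test-01):

[replication_port://9887]

[clustering]
mode = slave
master_uri = https://splunk-master-test-01.oit.duke.edu:8089
pass4SymmKey = <redacted>
# advertise the private NIC for bucket replication and for search
register_replication_address = splunk-index-test-01-private.oit.duke.edu
register_search_address = splunk-index-test-01-private.oit.duke.edu
# advertise the other NIC for forwarder traffic (only used with indexer discovery)
register_forwarder_address = splunk-index-test-01.oit.duke.edu

(If the forwarder isn't using indexer discovery, the ingest NIC is really just whatever hostnames are in its outputs.conf.)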

The search-head is connected to the master, and in Settings > Distributed search > Search peers, or on the command line, I see all three indexers in the cluster:

splunk list search-server
Server at URI "dsplunk-index-test-01.oit.duke.edu:8089" with status as "Up"
Server at URI "splunk-index-test-01-private.oit.duke.edu:8089" with status as "Up"
Server at URI "splunk-index-test-02-private.oit.duke.edu:8089" with status as "Up"
Server at URI "splunk-index-test-03-private.oit.duke.edu:8089" with status as "Up"

But I only see results from one indexer when I search from the web GUI on the search-head, or from its command line.

This is my command-line search: splunk search "index=* | chart count by splunk_server"
I'm using the same search in the web GUI, just the part inside the quotes.

If I run the command-line search on the indexers individually I get results from the specific search-peer.

If I run the command-line search on the master, I get results from all three search-peers.

   splunk_server     count      
-------------------- -----                                                                   
splunk-index-test-01   57
splunk-index-test-02   39
splunk-index-test-03   456          

If I run the command-line search from the search-head I get one result.

   splunk_server     count
-------------------- -----
splunk-index-test-01   57

If I had configured the search-head's connection to the master incorrectly, I wouldn't see the search peers in the list search-server output, or I wouldn't see any results at all. As it is, it makes no sense that one of the three indexers shows up and the other two don't. Firewalls are open from the search-head to both NICs on all three indexers, and I can telnet to port 8089 from the search-head to both NICs on all three boxes.
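
Beyond plain telnet, the management port can also be checked with an authenticated REST call along these lines (credentials are placeholders), to confirm splunkd itself is answering on each NIC and not just accepting the TCP connection:

curl -k -u admin:<password> https://splunk-index-test-02-private.oit.duke.edu:8089/services/server/info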

Here's the snippet from server.conf on the search-head:

[clustering]
master_uri = https://splunk-master-test-01.oit.duke.edu:8089
mode = searchhead
pass4SymmKey = $1$7/FK0zLe7w3j3t4lkTuxrXaNBB9vpccQ==

And from the master:

[clustering]
cluster_label = oit
mode = master
pass4SymmKey = $1$bYZ2q5Vu//5VNuiwljjQlH9xYhGBKA==
replication_factor = 2
search_factor = 1

(pass4SymmKeys have been changed)
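
For completeness, the same search-head settings can also be put in place (or re-checked) from the CLI, roughly along these lines, with the key redacted:

splunk edit cluster-config -mode searchhead -master_uri https://splunk-master-test-01.oit.duke.edu:8089 -secret <pass4SymmKey>
splunk restart
splunk btool server list clustering --debug

The btool line just confirms which copy of the [clustering] stanza actually wins on the search-head.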

splunk show cluster-status shows that everything is up and searchable, all green lights.
How do I get my search-head to accept that it should be able to see the other search-peers?
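
For reference, by "all green lights" I mean the master-side view from commands along these lines:

splunk show cluster-status
splunk list cluster-peers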

duke_splunk_adm
Engager

The search.log shows that it is kind of aware of the other two search-peers:

10-12-2017 16:42:57.557 INFO DistributedSearchResultCollectionManager - Connecting to peer splunk-index-test-02 connectAll 0 connectToSpecificPeer 1
10-12-2017 16:42:57.557 INFO DistributedSearchResultCollectionManager - Connecting to peer splunk-index-test-03 connectAll 0 connectToSpecificPeer 1

and

10-12-2017 16:42:57.563 INFO DistributedSearchResultCollectionManager - Successfully created search result collector for peer=splunk-index-test-02 in 0.003000 seconds
10-12-2017 16:42:57.565 INFO DistributedSearchResultCollectionManager - Successfully created search result collector for peer=splunk-index-test-03 in 0.003000 seconds

I'm not sure how helpful this is, given that it still says those peers don't exist when I specify them directly.
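
By specifying them directly I mean a search along these lines from the search-head (a sketch, not verbatim):

splunk search "index=* splunk_server=splunk-index-test-02* | stats count by splunk_server"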
