Splunk Search

Cannot determine a latest common bundle, search may be blocked

GenRockeR
Explorer

Hi guys.

Why does Splunk log so many of these errors, and what can I do in this situation?

05-17-2019 18:58:08.036 +0300 WARN DistributedPeerManager - Cannot determine a latest common bundle, search may be blocked
05-17-2019 18:58:08.036 +0300 WARN DistributedPeerManager - Cannot determine a latest common bundle, search may be blocked
05-17-2019 18:58:08.037 +0300 WARN DistributedPeerManager - Cannot determine a latest common bundle, search may be blocked

1 Solution

codebuilder
Influencer

I've encountered this before, especially on new SHC builds.
You'll need to perform a manual / destructive resync in order to get them properly clustered again.

splunk resync shcluster-replicated-config

https://docs.splunk.com/Documentation/Splunk/7.2.6/DistSearch/HowconfrepoworksinSHC#Perform_a_manual...

If that does not work, you will likely need to bootstrap a new captain, then add (init) the other members back in (a sketch of that re-add step follows the command below).

splunk bootstrap shcluster-captain -servers_list "<URI>:<management_port>,<URI>:<management_port>,..." -auth <username>:<password>
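For the "add (init) the other members back in" step, here is a minimal sketch of the commands involved; the URIs, ports, and the shcluster key are placeholders to replace with your own values, and the paths assume a default /opt/splunk install.

# On each member that has to rejoin, re-initialize its SHC configuration and restart:
/opt/splunk/bin/splunk init shcluster-config -auth <username>:<password> -mgmt_uri https://<this_member>:8089 -replication_port <replication_port> -conf_deploy_fetch_url https://<deployer>:8089 -secret <shcluster_key>
/opt/splunk/bin/splunk restart

# Then, from the member being re-added, point it at any existing member (for example the new captain):
/opt/splunk/bin/splunk add shcluster-member -current_member_uri https://<existing_member>:8089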
----
An upvote would be appreciated and Accept Solution if it helps!


mhouse333
Loves-to-Learn Lots

It is possible that something other than what is stated above is causing this error. Since I identified a unique root cause, I wanted to share it with everyone. The last bullet below is what worked for me, but together the bullets summarize the recommended steps for getting to the root cause.

  • First, verify that the size of the bundle being sent from the SH is not greater than the bundle size limits: maxBundleSize in distsearch.conf on the SH and max_content_length in server.conf on the indexers (see the sketch after this list).
  • Then check for permissions/ownership errors on all the instances by running “ls -lahR /opt/splunk | grep root”
  • Then run ./splunk btool check
  • Then check the CM bundle details and confirm that the latest active bundle on the peers is the same as on the CM.
  • Then run the top command to see whether any processes other than Splunk are using a significant percentage of CPU. A newly introduced application could be preventing writes from completing for long periods because it locks files that Splunk needs. This can be verified further by:
    • Running “sudo tcpdump host <ipaddressofsourceSH>” on each indexer, then attempting to run your search from the SH and checking whether the traffic comes across.
    • If it does not, an application in your environment is preventing Splunk from doing what it needs to do, and you need to request an exception for that recently introduced application.
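To make the first bullet concrete, here is a minimal sketch of how you might check those limits and the actual bundle sizes; the paths assume a default /opt/splunk install, and the setting names come from the bullet above.

# Effective bundle-size limits (run the first on the SH, the second on an indexer):
/opt/splunk/bin/splunk btool distsearch list replicationSettings | grep -i maxBundleSize
/opt/splunk/bin/splunk btool server list httpServer | grep -i max_content_length

# Size of the knowledge bundles the SH is actually shipping:
ls -lh /opt/splunk/var/run/*.bundle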

sylax
Explorer

If this error is being generated on the cluster master, go to Settings > Distributed Peers and verify the health of the indexers; it's possible that the remote credentials have expired or have changed. Click on each of the peer nodes and re-authenticate. This should fix the issue.
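If you prefer the CLI, here is a minimal sketch of re-adding a peer so that fresh remote credentials are stored; the host and account names are placeholders, and if the peer is already listed you may need to remove it in the UI first.

# Run on the instance that lists the peer; re-adding stores fresh remote credentials:
/opt/splunk/bin/splunk add search-server https://<indexer>:8089 -auth admin:<local_password> -remoteUsername admin -remotePassword <remote_password>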


GenRockeR
Explorer

/opt/splunk/bin/splunk bootstrap shcluster-captain -servers_list "https://splunk-sh11:8089,https://splunk-sh21:8089" -auth admin:XXXXXX

server=https://splunk-sh11:8089, error=This node seems to have already joined another cluster with below members: 'https://splunk-sh11:8089,https://splunk-sh21:8089'.

First remove the member from the old cluster. Then run 'splunk clean raft' on the member to reuse it in a new cluster; server=https://splunk-sh21:8089, error=This node seems to have already joined another cluster with below members: 'https://splunk-sh11:8089,https://splunk-sh21:8089'.

First remove the member from the old cluster. Then run 'splunk clean raft' on the member to reuse it in a new cluster;
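For reference, here is a sketch of the sequence that error message describes, using the host names from the command above and a placeholder password. Run everything except the final bootstrap on each member that reports having already joined another cluster.

# Remove the member from the old cluster (run on the member itself):
/opt/splunk/bin/splunk remove shcluster-member

# Clear the stale raft state so the node can join a new cluster:
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean raft
/opt/splunk/bin/splunk start

# Then re-run the bootstrap from the node you want as captain:
/opt/splunk/bin/splunk bootstrap shcluster-captain -servers_list "https://splunk-sh11:8089,https://splunk-sh21:8089" -auth admin:<password>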


GenRockeR
Explorer

Hello again.

If I already have a configured cluster, will the initial bootstrap of the cluster master result in the loss of all settings and users?


GenRockeR
Explorer

[root@splunk-sh21 certs]# /opt/splunk/bin/splunk resync shcluster-replicated-config
The member has been synced to the latest replicated configurations on the captain.

But I still have the same trouble:
05-20-2019 10:57:18.405 +0300 WARN DistributedPeerManager - Cannot determine a latest common bundle, search may be blocked


codebuilder
Influencer

I think your best bet is to rebuild the SHC altogether. Remove all members from the cluster and cycle Splunk. Then bootstrap one of the nodes as captain with the command I posted previously.

----
An upvote would be appreciated and Accept Solution if it helps!

RishiMandal
Explorer

Can you log in to the DS, push the latest bundle, and run splunk apply cluster-bundle from the cluster master to all your peers? Please paste the errors you get after doing this.
Always check the CM bundle details and confirm that the latest active bundle on the peers is the same as on the CM.
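A minimal sketch of those two steps, assuming a default /opt/splunk install and a placeholder admin password; both commands run on the cluster master.

# Push the latest configuration bundle from the cluster master to all peers:
/opt/splunk/bin/splunk apply cluster-bundle --answer-yes -auth admin:<password>

# Compare the active bundle checksum on the master with the checksum on each peer:
/opt/splunk/bin/splunk show cluster-bundle-status -auth admin:<password>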
