Deployment Architecture

How come I'm receiving the following message: "Search Head Clustering Service Not Ready"?

vulnfree
Explorer

I am in the process of clustering my search heads, but I am receiving the following message in the web interface when I click Settings -> Search head clustering.

Please wait, the status of your search head cluster is not ready.

Service ready flag: false

Rolling restart in progress: false
0 Karma
1 Solution

muralikoppula
Communicator

@vulnfree
You'll have to provide more details about this issue. Check splunkd.log and mongod.log.

There are different scenarios which could cause this type of issue:
- The KV store might not be working properly
- Check your SSL certificates
- Check your server resources (sometimes too many data model accelerations can cause this type of issue, so try disabling unnecessary data models)
- Are you seeing any skipped searches on the search heads?
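The checks above can be run from the CLI before doing anything destructive. This is a sketch only; log paths assume a default install, and the grep patterns are just examples of what to look for:

```shell
# KV store / replica set state on each member
$SPLUNK_HOME/bin/splunk show kvstore-status

# Overall cluster view (captain, member status)
$SPLUNK_HOME/bin/splunk show shcluster-status

# Recent errors in the KV store and search head clustering logs
grep -i "error" $SPLUNK_HOME/var/log/splunk/mongod.log | tail -20
grep -i "kvstore\|shcluster" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -20
```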

If this is a non-production environment, try the steps below. If it is production, please be careful before running these commands; you should know exactly what you're doing.

  1. Stop all search head cluster members and back up the KV store:

$SPLUNK_HOME/bin/splunk stop
cd $SPLUNK_HOME/var/lib/splunk
tar cvfz kvstore-.tar.gz kvstore

Move the archive to a safe place.

  2. Clean the raft and mongod folders:

$SPLUNK_HOME/bin/splunk clean kvstore --cluster
$SPLUNK_HOME/bin/splunk clean raft

  3. Verify all members have replication_factor = 3:

$SPLUNK_HOME/bin/splunk btool server list shclustering | grep replication_factor

  4. Start all members:

$SPLUNK_HOME/bin/splunk start

  5. Initialize all members:

$SPLUNK_HOME/bin/splunk init shcluster-config -auth admin:changed -mgmt_uri https://sh1.example.com:8089 -replication_port 1234 -replication_factor 3 -conf_deploy_fetch_url https://:8089 -secret mykey -shcluster_label shc01

  6. Verify the KV store:

$SPLUNK_HOME/bin/splunk show kvstore-status

  7. Resync stale KV store members (https://docs.splunk.com/Documentation/Splunk/7.2.1/Admin/ResyncKVstore):

$SPLUNK_HOME/bin/splunk resync kvstore
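After the resync, you can confirm that the service-ready flag has flipped. A quick sketch (hostname and credentials are placeholders, not from the thread):

```shell
# Cluster-wide view: captain, member states, service_ready_flag
$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changed

# Or query the captain's REST endpoint directly on any member
curl -k -u admin:changed https://sh1.example.com:8089/services/shcluster/captain/info
```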


0 Karma

vulnfree
Explorer

@muralikoppula

Do I add the deployment server to the "splunk bootstrap shcluster-captain -servers_list" command?

https://docs.splunk.com/Documentation/Splunk/7.2.3/DistSearch/SHCdeploymentoverview
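For reference, a sketch of the bootstrap command (hostnames, port, and credentials are placeholders). Per the linked docs, -servers_list takes the management URIs of the SHC members themselves; the deployer configured via conf_deploy_fetch_url is a separate role and is not listed here:

```shell
# Run once, on the member you want to become the initial captain.
# servers_list = mgmt URIs of the SHC members only (placeholders below).
$SPLUNK_HOME/bin/splunk bootstrap shcluster-captain \
    -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
    -auth admin:changed
```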

0 Karma
