
After upgrading to Splunk 6.6.0, why are 5 indexes marked as not searchable?

sylim_splunk
Splunk Employee

After an upgrade of our small cluster (two indexers, one search head, RF=SF=2) to 6.6.0, five indexes are marked as not searchable.
There are zero "fix-up" type tasks. I have tried gracefully restarting both indexers again (one at a time), with no luck. Any help is appreciated. Thanks.

[Screenshot: no fix-up tasks]

1 Solution

sylim_splunk
Splunk Employee

OK, the situation appears fine now. This may not be a common case. I ran the commands below to check the bucket status:

i) REST call: https://ClusterMaster:8089/services/cluster/master/buckets?filter=replication_count%3C2&filter=froze...
This returns the list of buckets with "frozen=false" and "replication_count < 2" (in an RF=SF=2 deployment).
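For reference, a sketch of that call in full. The filter value frozen%3Dfalse is an assumption reconstructed from the truncated URL and the description above; output_mode=json is optional but makes the output easier to script against:

# Reconstructed call; frozen%3Dfalse is inferred, not copied from the original
curl -k -u admin:password "https://ClusterMaster:8089/services/cluster/master/buckets?filter=replication_count%3C2&filter=frozen%3Dfalse&output_mode=json"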

ii) After getting the list of buckets from step i), I checked each bucket individually using https://ClusterMaster:8089/services/cluster/master/buckets/{BID OF BUCKET IN ABOVE LIST}
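As a sketch, a concrete form of that per-bucket check; the bucket ID below is hypothetical (Splunk bucket IDs take the index~localId~originGUID form), so substitute an ID returned by step i):

# {BID} placeholder replaced with a made-up example bucket ID
curl -k -u admin:password "https://ClusterMaster:8089/services/cluster/master/buckets/main~42~A1B2C3D4-E5F6-7890-ABCD-EF1234567890"

In the response, each healthy bucket should list its copies under a peers stanza; the problem buckets here had none.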

Strangely, some buckets had no PEER information at all, which suggests the cluster master was tracking buckets that its peers no longer had.
Running remove_all against those buckets worked around the issue:

curl -k -u admin:password https://ClusterMaster:8089/services/cluster/master/buckets/{BID}/remove_all -X POST
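Putting steps i) and ii) together, a hedged sketch of scripting the whole workaround with curl and jq. The exact JSON layout (a peers map under each entry's content, empty for the orphaned buckets) is an assumption; verify against your own output before POSTing remove_all:

# Sketch: find unfrozen, under-replicated buckets that no peer reports,
# then remove the orphaned entries from the cluster master.
CM="https://ClusterMaster:8089"
AUTH="admin:password"
curl -sk -u "$AUTH" "$CM/services/cluster/master/buckets?filter=replication_count%3C2&filter=frozen%3Dfalse&output_mode=json" |
jq -r '.entry[] | select((.content.peers // {}) | length == 0) | .name' |
while read -r BID; do
  echo "Removing orphaned bucket $BID"
  curl -sk -u "$AUTH" -X POST "$CM/services/cluster/master/buckets/$BID/remove_all"
done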


