
Upgrade with Search and Replication factors not met

scottfrandsen
Explorer

I'm getting ready to upgrade my cluster from 6.4.2 to 6.6.12 to 7.3.4. However, my Replication Factor and Search Factor are not met. Is it OK to proceed with the upgrade anyway? If not, what can be done to fix it? I've searched for resolutions but have not found anything that applies or works yet.

RF=2
SF=2
I have a master, 2 indexers/peers and 2 search heads
Fixup Tasks in Progress=0
Fixup Tasks Pending=12737
Excess Buckets=0

Most fixup statuses are "cannot fix up search factor as bucket is not serviceable" or "Missing enough suitable candidates to create searchable copy in order to meet replication policy. Missing={ default:1 }"
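For reference, the pending fixup list and the reasons behind it can be pulled straight from the cluster master. This is only a rough sketch; the REST endpoint and level parameter are assumptions on my part, and the host/credentials are placeholders:

    # on the cluster master: overall peer and RF/SF status
    $SPLUNK_HOME/bin/splunk show cluster-status
    # pending fixup tasks with their reasons (assumed endpoint; adjust host/credentials)
    curl -k -u admin:changeme https://localhost:8089/services/cluster/master/fixup?level=search_factor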

1 Solution

scottfrandsen
Explorer

I found some indexes with spaces in the names. After replacing the spaces with underscores and making the needed changes in indexes.conf and inputs.conf, RF and SF are now met.
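For reference, this is roughly what the change looks like; the index name below is hypothetical, not one of the actual indexes from this cluster:

    # indexes.conf - before (space in the stanza name and paths)
    [web proxy]
    homePath   = $SPLUNK_DB/web proxy/db
    coldPath   = $SPLUNK_DB/web proxy/colddb
    thawedPath = $SPLUNK_DB/web proxy/thaweddb

    # indexes.conf - after (underscores instead of spaces)
    [web_proxy]
    homePath   = $SPLUNK_DB/web_proxy/db
    coldPath   = $SPLUNK_DB/web_proxy/colddb
    thawedPath = $SPLUNK_DB/web_proxy/thaweddb

    # inputs.conf - point any inputs at the renamed index
    [monitor:///var/log/proxy.log]
    index = web_proxy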



thambisetty
SplunkTrust

@scottfrandsen 

May I know exactly what you did to fix the issue?

————————————
If this helps, give a like below.

scottfrandsen
Explorer

@thambisetty I removed the spaces from all the index names and directories. I did this in the indexes.conf file and the associated index directories under $SPLUNK_DB.
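Roughly, the steps per peer looked like this; the index name is a placeholder, and the cluster-bundle push only applies if indexes.conf is managed from the master:

    # stop Splunk on the peer before touching its data directories
    $SPLUNK_HOME/bin/splunk stop
    # rename the on-disk index directory to match the new stanza name
    mv "$SPLUNK_DB/web proxy" "$SPLUNK_DB/web_proxy"
    # edit indexes.conf/inputs.conf to use the underscored name, then start again
    $SPLUNK_HOME/bin/splunk start
    # if indexes.conf lives in the master-apps bundle, push it from the master instead
    $SPLUNK_HOME/bin/splunk apply cluster-bundle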


ivanreis
Builder

Please do run migrate the splunk version before you fix it.
Also check this splunk answer to get more information about this error -> https://answers.splunk.com/answers/406085/why-am-i-getting-bucket-not-serviceable-errors-on.html

In order to fix this issue, please restart the indexer cluster using the command ./splunk rolling-restart cluster-peers. The restart rolls hot buckets to warm, and the cluster will then try to fix the bucket issues. Also restart the Splunk service on the cluster master.
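A minimal sketch of that sequence, assuming the commands are run from $SPLUNK_HOME/bin on the cluster master:

    # restart the cluster master service itself
    ./splunk restart
    # trigger a phased restart of all peer nodes from the master
    ./splunk rolling-restart cluster-peers
    # watch replication/search factor recover
    ./splunk show cluster-status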


scottfrandsen
Explorer

Your reply is much appreciated. Please clarify the statement "Please do run migrate the splunk version before you fix it." Should this be "do not run?"
I have restarted the master (splunk stop, splunk start) and then ran the rolling-restart after a while. A couple of hours later the pending fixup tasks had gone up to 12991. About 44 of the 132 indexes meet SF and RF; strangely, these are somewhat grouped alphabetically - index names that start with A through FF all have good SF/RF, except for 5; the rest of the indexes, FE through W (except for main), do not meet SF/RF.
The firewall is good; netstat shows established connections from the peers to the master on port 8089 and in both directions between the two peers on 8080.
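For completeness, the connectivity checks were along these lines (the port numbers are the ones in use here; grep is only a rough filter):

    # on each peer: management connection to the master
    netstat -an | grep 8089
    # on each peer: replication traffic to/from the other peer
    netstat -an | grep 8080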


scottfrandsen
Explorer

Thanks again, but:
- there are no excess buckets
- v6.4.2 does not have options for fixing buckets as described

I'm looking into some support options from Splunk for an answer.


ivanreis
Builder

Sorry, I missed the "not" word, so please do not run the upgrade until this issue is fixed.
- remove all the excess buckets as described in this link (sketched below) ->
https://docs.splunk.com/Documentation/Splunk/7.1.2/Indexer/Removeextrabucketcopies

After you finish that process, restart the indexer cluster and validate that the SF/RF are met.
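A sketch of what that looks like from the cluster master's CLI; the index name is a placeholder and can be omitted to cover all indexes, per the linked docs:

    # on the cluster master: drop bucket copies beyond RF/SF for one index
    ./splunk remove excess-buckets my_index
    # then restart the peers in a controlled way and re-check RF/SF
    ./splunk rolling-restart cluster-peers
    ./splunk show cluster-status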

For further information, please check this troubleshooting guide:
https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Bucketreplicationissues
