I ran the data rebalance (and also tried the REST endpoint on the master) to get the primaries rebalanced, as described here: http://docs.splunk.com/Documentation/Splunk/7.1.2/Indexer/Rebalancethecluster
But nothing happens. The data itself is balanced, but in my two-indexer cluster only one indexer returns the results for all data (and yes, it spans several buckets).
The background: originally all data flowed to only one indexer. Then I got the other one operating, so all the db_ buckets ended up on indexer 01 and all the rb_ buckets on indexer 02.
I thought primary rebalancing would balance the primaries between indexers 01 and 02, so that both indexers would return search results to the search head.
I get this in splunkd on the master:
08-03-2018 09:40:16.097 +0200 INFO CMMaster - scheduled rebalance primaries
And then nothing happens.
Am I missing something here?
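For reference, this is the sort of REST call I mean; the host and credentials below are placeholders, and the endpoint path is the one I took from the 7.x docs, so treat it as an assumption to verify against your version:

```shell
# Placeholder master host and credentials -- substitute your own.
MASTER="https://master.example.com:8089"

# Trigger a primary rebalance on the cluster master (endpoint path per the
# 7.x docs; verify it for your version before relying on it):
# curl -k -u admin:changeme -X POST "$MASTER/services/cluster/master/control/control/rebalance_primaries"
echo "$MASTER/services/cluster/master/control/control/rebalance_primaries"
```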
This problem can happen if the buckets on the original indexer were created on a lone indexer that was not part of a cluster. (You can have a cluster of one, which avoids this issue, but most people don't set that up originally.)
Unclustered buckets have a slightly different naming convention from clustered buckets, and therefore aren't treated the same way during rebalancing. (I'm actually surprised there are rb_ buckets for those on the other indexer, which makes me wonder what else is going on.)
Here's the naming conventions from this page ... https://docs.splunk.com/Documentation/Splunk/7.0.4/Indexer/HowSplunkstoresindexes
For non-clustered buckets:
db_<newest_time>_<oldest_time>_<localid>

For clustered original bucket copies:
db_<newest_time>_<oldest_time>_<localid>_<guid>

For clustered replicated bucket copies:
rb_<newest_time>_<oldest_time>_<localid>_<guid>
So, first check whether the older buckets match the non-clustered naming convention, in which case they wouldn't be subject to the cluster's control.
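That check can be scripted. A minimal sketch (the sample bucket names and the GUID are invented for illustration):

```shell
# Classify a Splunk bucket directory name as clustered or non-clustered by
# counting its underscore-separated fields: db_<newest>_<oldest>_<localid>
# has 4 fields; clustered copies append the peer GUID as a 5th field
# (the GUID itself contains hyphens, not underscores, so the count is safe).
classify_bucket() {
  n=$(echo "$1" | awk -F'_' '{print NF}')
  if [ "$n" -ge 5 ]; then
    echo clustered
  else
    echo non-clustered
  fi
}

classify_bucket "db_1533282016_1533195616_42"                                      # -> non-clustered
classify_bucket "db_1533282016_1533195616_42_AB1C2D3E-0000-1111-2222-333344445555" # -> clustered
```

On the indexer you would feed it the directory names under $SPLUNK_HOME/var/lib/splunk/&lt;index&gt;/db.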
If they ARE in the clustered naming convention, and you have the factors set to 2/2, there should be searchable copies on both indexers. In that case the primary rebalance wouldn't really do anything, because all copies would already be searchable and you'd only have to talk to one indexer to get that data. So, in that case, try setting it back to 1/2, then do your primary rebalance and see if that fixes it.
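Dropping the search factor back to 1 is a one-line change on the master; a sketch assuming the 7.x CLI flags (check the CLI help on your version):

```shell
# Run on the cluster master (flag syntax assumed from the 7.x CLI docs):
# splunk edit cluster-config -search_factor 1
# then restart the master for it to take effect:
# splunk restart
echo "splunk edit cluster-config -search_factor 1"
```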
Well, it was part of the cluster, and the buckets have the GUID in their directory names. So that was not the issue.
The cluster was created with two indexers. One indexer (let's call it indexer 2) needed to be reinstalled, so during that time all data was sent to indexer 1.
After the reinstall of indexer 2, which took several days, every db_ bucket on indexer 1 got an rb_ copy on indexer 2. So the clustering itself works fine.
But when searching that data, only indexer 1 returned results, because all the primaries were on indexer 1. After a rebalance primaries I would expect at least some of the rb_ buckets to be chosen as primaries, but not even one bucket was.
Now that the cluster is operating on two indexers, data is evenly distributed between them going forward, so both create db_ buckets and each holds about half of the primaries.
So it is not a problem for new data. I was just a bit disappointed that rebalance primaries did not work on the old data.
What are the replication factor and search factor that you are using?
The doc says: "To achieve primary rebalancing, the master reassigns the primary state from existing bucket copies to searchable copies of the same buckets on other peers." For this to happen, you need searchable copies on the other indexer.
Can you check this?
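One way to check (host and credentials are placeholders; the `cluster/master/indexes` endpoint reports per-index copy counts in 7.x, but verify against your version's REST reference):

```shell
# Placeholder master host and credentials -- substitute your own.
MASTER="https://master.example.com:8089"

# Per-index counts of replicated and searchable copies, as the master sees them:
# curl -k -u admin:changeme "$MASTER/services/cluster/master/indexes?output_mode=json"

# CLI alternative, run on the master itself:
# splunk show cluster-status
echo "$MASTER/services/cluster/master/indexes"
```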
It is 2 for both. And moving forward I get balanced primaries, as data is pouring into both indexers.
But I thought that after having a lot of data go only to indexer 01, and then adding indexer 02, once the replication and search factors were met I could run a primary rebalance so that some of the rb_ buckets on 02 could take over some of the work.
Going forward I have balanced data and primaries, as both indexers now get data directly from the forwarders, and all is good. I'm just wondering why rebalance primaries didn't do anything.