Deployment Architecture

Multisite Cluster staggered deployment

hunderliggur
Path Finder

We are deploying a new instance of Splunk Enterprise and have decided on a multisite cluster architecture for high availability and disaster recovery. Unfortunately, we are getting our hardware resources in drops; we have enough resources now to build out one site (site1) and we expect the next drop later this fall/early winter to build out our second site (site2). After that time we expect additional resources to add indexers at each of the two sites.

Our initial hardware will support a 3-peer indexer cluster (with the associated master, deployment server, license master, search head, etc.). We would like a replication factor of 3 and a search factor of 2. When we get the resources deployed at site2, we would like to rebalance the buckets from site1 using site2 resources.

Can we initially use the following master and indexer server.conf to enable multisite on day one, then update the conf when site2 comes online?

master server.conf:

[general]
...
site = site1

[clustering]
available_sites = site1
mode = master
multisite = true 
pass4SymmKey = whatever
site_search_factor = origin:1,total:2
site_replication_factor = origin:2,total:3

indexer server.conf:

[general]
site = site1
 ...
[clustering]
master_uri = https://master:8089
mode = slave
pass4SymmKey = whatever

Once site2 is online, we would update the master server.conf as:

[general]
...
site = site1

[clustering]
available_sites = site1,site2
mode = master
multisite = true 
pass4SymmKey = whatever
site_search_factor = origin:1,site1:1,site2:1,total:2
site_replication_factor = origin:2,site1:1,site2:1,total:3

The indexer server.conf would be configured for site1 or site2 as appropriate:

[general]
site = site1|site2
 ...
[clustering]
master_uri = https://master:8089
mode = slave
pass4SymmKey = whatever

A forced data rebalance should then stream replicated and searchable copies of buckets to site2. Yes, this could take a while but eventually it would complete (I expect).
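
For reference, I assume the rebalance would be kicked off and monitored from the master's CLI with something along these lines (there is also an equivalent control in the master's web UI):

splunk rebalance cluster-data -action start
splunk rebalance cluster-data -action status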

Any unforeseen issues with this plan?

tsheets13
Communicator

I had a similar issue. I simply built out the single-site cluster, then converted to a multisite cluster once my new hardware was in place at site2.

If you follow this doc EXACTLY, you should find it works great.

https://docs.splunk.com/Documentation/Splunk/7.3.1/Indexer/Migratetomultisite
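
Roughly, the conversion in that doc comes down to enabling multisite on the master and then assigning each node a site. With your factors it would be something like this on the master (double-check the exact CLI flags for your version against the doc):

splunk edit cluster-config -mode master -multisite true -available_sites site1,site2 -site site1 -site_replication_factor origin:2,total:3 -site_search_factor origin:1,total:2
splunk restart

Then set the site on each peer and search head (e.g. splunk edit cluster-config -site site1 or -site site2) and restart them.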

hunderliggur
Path Finder

tsheets13 - thank you, that answers my question. I had not stumbled on that section yet!

Post-migration bucket behavior
After migration, all buckets created before migration continue to adhere to their single-site replication and search factor policies, by default. You can alter this behavior so that pre-migration buckets adhere instead to the multisite policies, by changing the constrain_singlesite_buckets setting on the master's server.conf to "false".

skalliger
Motivator

Maybe you could get four indexers.

This way, you could start with a 2 IDX + 2 IDX multisite cluster from day one and wouldn't have to bother with replicating all buckets manually later on.

Also, you'd need to start with an RF of 2, but increasing it later on to 3 wouldn't be a problem anyway.
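
As a sketch (site names and factors are just an example), the master's day-one config for a 2+2 cluster could look something like:

[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2
pass4SymmKey = whatever

Raising the factors to origin:2,total:3 later should just make the cluster create the additional copies needed to meet the new policy.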

Skalli
