Deployment Architecture

Can we split a big shcluster into two small ones?

liangliang
Explorer

We have a big shcluster serving users from many departments, and for some reason we must split one department off for independent use. We considered creating a new cluster directly, but we have too many things to migrate.
We plan to network-isolate some of the existing cluster nodes, then configure the isolated part as a clone of the original cluster, and finally delete the unnecessary apps on both clusters. Is this feasible?

1 Solution

PickleRick
SplunkTrust

OK. There are some theoretical aspects, but there is also the practical one: I'm not aware of any way of "splitting" an SHC as such. You can bootstrap a clean SHC from the same deployer, but as @isoutamo said, that's probably not the best choice, since you will want to keep those SHCs different after all.

So when we were doing this for one of our customers we did:

1. Spin up a new environment with completely clean deployer and clean SHs

2. Copy out selected apps from the old deployer to the new deployer

3. Copy out the modified app state from one of the SHC members and merge it with the apps on the new deployer (this might not apply to you if your users don't have permissions to modify apps on the SHC and your admins don't do it either).

4. While modifying the built-in apps is not best practice, sometimes people do make those changes. We migrated changes from the built-in apps (like apps/search/local) into custom apps (e.g. search_migrated).

5. Migrating users and their content, if you want to do that, might be problematic. We didn't bother.
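For steps 2 and 4, the copy itself is filesystem-level: deployer-managed apps live under the deployer's `$SPLUNK_HOME/etc/shcluster/apps`. A rough sketch, assuming hypothetical host names (`old-deployer`) and app names; adjust paths to your installation:

```shell
# On the new deployer: pull selected apps from the old deployer's
# configuration bundle directory. Host and app names are examples.
OLD=old-deployer
SRC=/opt/splunk/etc/shcluster/apps
DST=/opt/splunk/etc/shcluster/apps

for app in dept_a_dashboards dept_a_alerts; do
    rsync -a "splunk@$OLD:$SRC/$app" "$DST/"
done

# Step 4: changes made inside a built-in app (e.g. apps/search/local)
# go into a custom app instead, e.g. search_migrated/local.
mkdir -p "$DST/search_migrated/local"
rsync -a "splunk@$OLD:/opt/splunk/etc/apps/search/local/" \
         "$DST/search_migrated/local/"
```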


liangliang
Explorer

@PickleRick @isoutamo @livehybrid @kiran_panchavat 
Testing found that when some nodes are isolated, the isolated nodes will not elect a new captain, because election requires more than half of the total number of members, and the captain cannot be manually specified. The following is the returned message.

This node is not the captain of the search head cluster, and we could not determine the current captain. The cluster is either in the process of electing a new captain, or this member hasn't joined the pool


https://docs.splunk.com/Documentation/Splunk/9.4.2/DistSearch/SHCarchitecture#Captain_election
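The behavior follows from the majority rule: a captain election needs more than half of all configured members, counted against the full member list, not just the reachable ones. A small sketch of the arithmetic for a hypothetical 6-member cluster split 4/2:

```shell
# Hypothetical illustration: a 6-member SHC split into groups of 4 and 2.
total_members=6
quorum=$(( total_members / 2 + 1 ))   # majority of ALL configured members

can_elect() {
  # $1 = number of reachable members on one side of the split
  [ "$1" -ge "$quorum" ] && echo yes || echo no
}

big_side=$(can_elect 4)     # the 4-node side can reach quorum
small_side=$(can_elect 2)   # the 2-node side never can: hence the error above
```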


isoutamo
SplunkTrust
That's just what I said earlier: you must set the captain manually in the smaller SHC. But I still propose that you install another SHC environment from scratch and migrate the needed apps and users from the old SHC. That way you will probably avoid some issues later on!
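For reference, the manual option here is the static captain mode, which Splunk documents as a disaster-recovery measure. A sketch of the CLI (host names are placeholders; verify the flags against the docs for your version before using this):

```shell
# On the member that should become the static captain:
splunk edit shcluster-config -mode captain \
    -captain_uri https://sh1.example.com:8089 -election false

# On every other member of the smaller group:
splunk edit shcluster-config -mode member \
    -captain_uri https://sh1.example.com:8089 -election false
```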


isoutamo
SplunkTrust

There are clear and simple steps for migrating users, too, from an old SHC or a single node to a new SHC via the deployer.
https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/Migratefromstandalonesearchheads
I have done this a couple of times in several environments, for both apps and users.

And don’t migrate anything from Splunk’s own system apps like search!

Then you must remember that if/when you deploy those apps via the deployer into the SHC members, you push everything into the default directories. This means that if users have previously created, e.g., alerts in the GUI, they cannot remove the ones you deployed via the deployer. They can change them, but admins must remove them from the deployer and then push again to the members.

That seems to be quite a common question from the user side in cases like this!
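In that procedure, user-level objects travel the same way as apps: copied into the deployer's configuration bundle, which also has a users directory. A rough sketch, assuming the old search head is reachable as `old-sh` (paths per the migration doc linked above; verify against your version):

```shell
# On the new deployer: stage apps and user objects from the old search head.
rsync -a splunk@old-sh:/opt/splunk/etc/apps/dept_a_app \
         /opt/splunk/etc/shcluster/apps/
rsync -a splunk@old-sh:/opt/splunk/etc/users/ \
         /opt/splunk/etc/shcluster/users/

# Then push the bundle to any member of the new cluster:
splunk apply shcluster-bundle \
    -target https://new-sh1.example.com:8089 -auth admin:changeme
```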


PickleRick
SplunkTrust

"Then you must remember that if/when you deploy those apps via the deployer into the SHC members, you push everything into the default directories. This means that if users have previously created, e.g., alerts in the GUI, they cannot remove the ones you deployed via the deployer. They can change them, but admins must remove them from the deployer and then push again to the members."

That's not entirely true. Whether the settings get pushed into default or local depends on the push mode.

But yes, migrating contents is something that's best done with a bit of thinking, not just blindly copying files from point A to B.
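For reference, the push mode is set per app, on the deployer, in that app's app.conf (a sketch; the mode names here are from the "Propagate SHC configuration changes" docs, so verify them against your Splunk version):

```ini
# local/app.conf inside an app staged on the deployer
[shclustering]
# one of: merge_to_default (classic behavior: local merged into default),
#         default_only, local_only,
#         full (default stays default, local stays local on the members)
deployer_push_mode = full
```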


isoutamo
SplunkTrust

Yeah, I have played with those modes, and they help in many cases; currently you can do things that weren't possible with, e.g., the 7.x versions. They are described here: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges

But out of curiosity: how did you manage the case where you initially push all configurations from the old SHC, and later a user removes, e.g., an alert from the SHC in the GUI, and it doesn't pop up again after the next apply, without you first removing it from the deployer after the initial push?


PickleRick
SplunkTrust

Luckily, we didn't have to tackle this one. It was way more important to us to move the content created than to remove it later.


liangliang
Explorer

Yes, we have a lot of things manually created by users on the cluster, and we need to migrate some specific user content and ensure that the things users created can still be modified and deleted normally. We need a long time to sort out the content that needs to be migrated, which is also the reason I didn't want to do it in the beginning. But now this is the only way.
1. We will first sort out, for the clusters to be migrated, which configurations were pushed as defaults and which were created by users.
2. Copy the content that needs to be migrated to the deployer of the new cluster, placing the user-created content in the local directory.
3. Perform one full-mode push.
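Steps 2-3 above could be sketched like this (app name and host are hypothetical; `deployer_push_mode` is a per-app setting in app.conf on the deployer, so check the docs for your version):

```shell
# On the new deployer: keep user-created settings in the app's local dir
mkdir -p /opt/splunk/etc/shcluster/apps/dept_a_app/local
# (copy the user-created savedsearches.conf etc. into that local dir)

# Mark the app for a "full" push so local/ lands in local/ on the members
cat >> /opt/splunk/etc/shcluster/apps/dept_a_app/local/app.conf <<'EOF'
[shclustering]
deployer_push_mode = full
EOF

# Push once to any member of the new cluster
splunk apply shcluster-bundle \
    -target https://new-sh1.example.com:8089 -auth admin:changeme
```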

livehybrid
SplunkTrust

Hi @liangliang 

You can separate some of the SHC members away, but I would recommend using a new SH deployer for them in order to manage the cluster effectively, as:

"The deployer sends the same configuration bundle to all cluster members that it services. Therefore, if you have multiple search head clusters, you can use the same deployer for all the clusters only if the clusters employ exactly the same configurations, apps, and so on." (See the deployer docs.)

Be sure to clear the raft and re-bootstrap the cluster (see https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/Handleraftissues#Fix_the_entire_cluste...) to configure the captaincy.  

There is some other good background info at https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/SHCarchitecture which might also help.
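The raft reset mentioned above looks roughly like this (a sketch based on the Handleraftissues doc linked above; member URIs are placeholders):

```shell
# On every member of the new, smaller cluster:
splunk stop
splunk clean raft
splunk start

# Then bootstrap a captain once, from one member:
splunk bootstrap shcluster-captain \
    -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
    -auth admin:changeme
```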


isoutamo
SplunkTrust

If your two new SHCs have the same content, then you can theoretically use one deployer to manage them both, but I think you want to keep the content of those SHCs different?

I haven't tried this, but maybe it works if you can separate those nodes physically at the network level? Do this at your own risk! I expect this is not a supported way to do it!

Split the members into two groups and keep the deployer in the bigger group, where the majority of nodes is. This group should automatically recover from the loss of the other members. If not, do the normal steps for removing members and sync the SHC & KV store.

For the second group you must replicate the current deployer to it. The docs have instructions on how to replace/recover a deployer. Then you probably need to set the captain manually to get the other SHC up and running.

I'm not sure whether you can change those deployers to new names or not. If not, you will probably get some issues later on!

I think the better way is just to create an additional SHC and deployer and then migrate the needed apps and users from the old one to the new. That is the official and supported way.

Anyhow, you must take an offline backup of the KV store and the nodes before starting the migration, and you should definitely try it in a test environment first!
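For the "normal steps for removing members" in the bigger group, the relevant commands are roughly these (URIs are placeholders; see the SHC member-removal docs for your version):

```shell
# From a remaining member: remove each node that left the cluster
splunk remove shcluster-member -mgmt_uri https://old-sh5.example.com:8089

# On a member whose replicated config has drifted, resync from the captain
splunk resync shcluster-replicated-config
```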

liangliang
Explorer

Thanks for your answer; I will try this in a test environment.


isoutamo
SplunkTrust
Anyhow, I strongly recommend you use that last option, as @PickleRick also presented!

kiran_panchavat
Champion

@liangliang 

Migration from a standalone search head to an SHC

Here is the document that discusses how to migrate from a standalone search head to a search head cluster:

https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Migratefromstandalonesearchheads 


kiran_panchavat
Champion

@liangliang 

You can deploy search head cluster members across multiple physical sites. You can also integrate cluster members into a multisite indexer cluster. However, search head clusters do not have site awareness.

https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/DeploymultisiteSHC 

https://community.splunk.com/t5/Deployment-Architecture/How-multisite-SH-clusters-work/m-p/594465 


liangliang
Explorer

@kiran_panchavat thanks for your answer, but we want to split a big shcluster, not build a multisite cluster.
