OK. There are some theoretical aspects, but there is also the practical one: I'm not aware of any way of "splitting" an SHC as such, and while you can bootstrap a clean SHC from the same deployer, as @isoutamo said, that's probably not the best choice, since you will want to keep those SHCs different after all. When we did this for one of our customers, we:

1. Spun up a new environment with a completely clean deployer and clean SHs.
2. Copied selected apps from the old deployer to the new deployer.
3. Copied the modified app state from one of the SHC members and merged it with the apps on the new deployer (this might not apply to you if your users don't have permission to modify the apps on the SHC and your admins don't do it either).
4. Migrated changes from the built-in apps (like apps/search/local) into custom apps (e.g. search_migrated). While modifying the built-in apps is in itself not best practice, sometimes people do make those changes.
5. Skipped users and their content. Migrating those, if you want to, might be problematic; we didn't bother.
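As a rough sketch of step 2, copying selected apps between deployer staging areas is essentially a recursive copy. The paths and the app name `dept_app` below are placeholders for illustration (in a real deployment this would be `$SPLUNK_HOME/etc/shcluster/apps` on each deployer host), so the example builds a stand-in layout first:

```shell
# Placeholder layout standing in for two deployers' shcluster dirs
BASE=$(mktemp -d)
OLD_DEPLOYER="$BASE/old/etc/shcluster/apps"
NEW_DEPLOYER="$BASE/new/etc/shcluster/apps"

# Stand-in app so the copy can be demonstrated anywhere
mkdir -p "$OLD_DEPLOYER/dept_app/local" "$NEW_DEPLOYER"
printf '[ui]\nis_visible = 1\n' > "$OLD_DEPLOYER/dept_app/local/app.conf"

# Copy only the apps the new SHC should keep
for app in dept_app; do
  cp -R "$OLD_DEPLOYER/$app" "$NEW_DEPLOYER/"
done
```

After that, pushing the bundle from the new deployer (`splunk apply shcluster-bundle`) distributes the copied apps to the new members as usual.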
Hi @liangliang

You can separate part of the SHC, but I would recommend using a new SH deployer for it in order to manage the cluster effectively, because "The deployer sends the same configuration bundle to all cluster members that it services. Therefore, if you have multiple search head clusters, you can use the same deployer for all the clusters only if the clusters employ exactly the same configurations, apps, and so on." (see the docs). Be sure to clear the raft state and re-bootstrap the cluster (see https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/Handleraftissues#Fix_the_entire_cluster) to re-establish captaincy. There is some other good background info at https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/SHCarchitecture which might also help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
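Per the "Fix the entire cluster" procedure linked above, the raft clean and re-bootstrap look roughly like this; the hostnames and credentials are placeholders for your environment:

```
# On every member of the new cluster:
splunk stop
splunk clean raft
splunk start

# Then, on the member you want to start out as captain:
splunk bootstrap shcluster-captain \
    -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
    -auth admin:yourpassword
```

Once the bootstrapped captain is up, the remaining members rejoin and a normal dynamic captain election takes over.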
Hi Ricklepick. Thanks for your info. We have a lot of data in our network. We lost logs from this device at approx. 05:15 in the morning, local time. At that time, there isn't a lot of traffic on our network. We had not experienced any lack of connectivity in the period where we were missing logs from this device. If it were the load balancer, then we should be missing logs from more than one device. Our syslog sources send logs through the NetScaler to the syslog servers. The syslog servers then send the syslogs on to the Splunk indexer cluster, which sends them to the heavy forwarders. Brgds DD
Thank you for your reply @livehybrid
1) Yes, they are unique.
2) Yes, I thought about that, but I could only find it on one indexer. I had not touched the indexers for 3-4 days, and today the conflicting bucket appeared on 2 indexers, so I renamed it on both. I'll check again tomorrow to see if it made any difference.
If your two new SHCs have the same content, then you can theoretically use one deployer to manage them both, but I think you want to keep the content of those SHCs different? I haven't tried this, but maybe it works if you can separate those nodes physically at the network level. Do this at your own risk! I expect this is not a supported way to do it!

Split the members into two groups and keep the deployer in the bigger group, where the majority of nodes is. This group should automatically recover from the loss of the other members; if not, do the normal steps for removing members and syncing the SHC and KV store. For the second group you must replicate the current deployer to it; the docs have instructions on how to replace/recover a deployer. Then you probably need to run a manual captain election to get the second SHC up and running. I'm not sure whether you can repoint those members at deployers with new names or not; if not, you will probably run into issues later on!

I think the better way is just to create an additional SHC and deployer and then migrate the needed apps and users from the old one to the new one. That is the official and supported way. In any case, you must take an offline backup of the KV store and the nodes before starting the migration, and you should definitely try it in a test environment first!
@liangliang Migration from a standalone search head to an SHC: here is the document that discusses how to migrate from a standalone search head to a search head cluster: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Migratefromstandalonesearchheads
@liangliang You can deploy search head cluster members across multiple physical sites. You can also integrate cluster members into a multisite indexer cluster. However, search head clusters do not have site awareness. https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/DeploymultisiteSHC https://community.splunk.com/t5/Deployment-Architecture/How-multisite-SH-clusters-work/m-p/594465
Now we have a big shcluster serving many department users, and for some reason we must split off one department for independent use. We considered creating a new cluster directly, but we have too many things to migrate. We plan to network-isolate some of the existing cluster nodes, then configure the isolated part as a second, cloned cluster, and finally delete the unnecessary apps on both clusters. Is this feasible?
Hey @Splunkers, Looking for valuable insights on this use case. I want to extract the number at the end of the log (highlighted in bold). Please help.

Sample log: 74.133.120.000 - LASTHOP:142.136.168.1 - [19/May/2025:23:30:12 +0000] "GET /content/*/residential.existingCustomerProfileLoader.json HTTP/1.1" 200 143 "/cp/activate-apps?cmp=dotcom_sms_selectapps_111324" "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Mobile Safari/537.36" 384622
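Assuming the target is always the last whitespace-separated run of digits on the line (384622 in the sample), a search-time `rex` can capture it with an end-of-string anchor; the field name `last_number` here is just an illustrative choice:

```
... your base search ...
| rex field=_raw "\s(?<last_number>\d+)$"
```

If the extraction should be permanent rather than ad hoc, the same pattern could go in props.conf as an EXTRACT entry for that sourcetype.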
If you search for only that first single event in that time window

index=abc source=xxx.trc GetDbfRecordFromCache

and do nothing else, but then look at the _raw event in the display, are the characters encoded in the data, or are they literal < and >? If you then open the event with the little arrow and select Show Source, what does the raw event data look like: is it encoded or not?
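To check this across many events at once rather than one by one, something like the following sketch could work; the `match()` against HTML entities is an illustrative test for encoded angle brackets, not a definitive diagnostic:

```
index=abc source=xxx.trc GetDbfRecordFromCache
| eval raw_form=if(match(_raw, "&lt;|&gt;"), "html-encoded", "literal")
| stats count by raw_form
```

If both forms show up, the encoding is being applied inconsistently somewhere upstream rather than by Splunk's display layer.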