Getting Data In

Copy index data between separate, unconnected instances

Gene
Path Finder

Hello Splunkers.

I have a question: we are moving from old servers to new ones. We had 5 non-clustered indexers, and we are going to have 3 indexers.

We want to move all indexed data to the new instances. I know the standard procedure for moving data, but I have an idea that might make the process easier. Maybe you have tried this solution and know whether it will work.

 

The idea is to set up data replication from, e.g., OldIDX1 => NewIdx1, OldIDX2 => NewIdx2, and OldIDX3,4,5 => NewIdx3.
Can you please advise whether such a 'solution' or 'workaround' will work, or whether it needs to be tested first?

 

With best Regards,

Gene

1 Solution

shivanshu1593
Builder

Data replication happens between the peers of an indexer cluster. Per your topology, sending old indexer 1 to new indexer 1, old indexer 2 to new indexer 2, and the rest to new indexer 3 would require 3 separate clusters, each with its own dedicated cluster manager (CM) and search head (SH).

So, can it be done? Yes (I'm not sure about the third mapping, since data distribution within a cluster cannot be targeted at a single peer, as that defeats the basic concept of data durability, so it should be tested as a PoC). Can they be combined into a single cluster and done that way? That, unfortunately, cannot be done. Is it worth doing? Your call 🙂
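For reference, a bare-bones sketch of what each of those 3 separate clusters would need in server.conf. Stanza names follow recent Splunk versions (older releases use mode = master and master_uri instead), and cm.example.com is a placeholder hostname, not something from this thread:

```ini
# server.conf on the cluster manager (one per cluster, so three managers in total)
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = <shared secret>

# server.conf on each peer (the old and the new indexer of that cluster)
[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <shared secret>
```

Standing up three managers and three search heads just to migrate data is a lot of moving parts, which is part of why this route is questionable.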

Good luck with the migration.

Thank you,

Shivanshu



Gene
Path Finder

Thank you for the reply, but actually I am interested in whether the data replication itself will work or not 🙂

 



shivanshu1593
Builder

The easiest, fastest, and safest way is to copy the data to the new indexers with the rsync command on Linux, connect them to the SH, and try searching the data.

Of all the alternative methods available, this is the best and safest route. You'll also keep a copy of the data on your old indexers to refer to whenever you need it.

 

Hope this helps.

Thank you,
Shiv