Installation

How do I migrate a lot of old data (TB)?

New Member

So, looking at the docs, moving the index buckets is generally how you move data. However, I'm migrating a lot of data from multiple servers to one server, and moving the index buckets will be difficult with that much data. In one of my previous questions about migration, someone said:

If you are already clustered, then just add a new Indexer, wait for rebalance, then kill an old one. Do this over and over until you have nothing but new Indexers.

This sounds better suited to my needs, but I have no idea how to go about it or what to consider -- the indexers will be on different versions.

Edit: or is it not really viable?

We need to move terabytes of data. I've looked at the Splunk docs; any information on this process would be greatly appreciated 🙂

1 Solution

SplunkTrust

I have done this using the advice you mentioned. Our scenario was a 6-indexer cluster, and we had to move from VMs to a new OS (on different hardware, so we couldn't re-attach the underlying storage).

The process was:

  • Add a new indexer into the existing cluster
  • Update the universal forwarders to send data to the new indexer (and remove the indexer to be decommissioned from outputs.conf, either in this step or a later one)
  • If running Splunk 6.6.x or newer, put the peer into detention (on older versions, automatic detention occurs if the peer runs out of disk space)
  • Offline the indexer to be decommissioned.
  • If running Splunk 6.5.x or newer, rebalance the data ( splunk rebalance cluster-data -action start )
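The steps above map to a handful of CLI commands. This is only a sketch -- which host each command runs on matters, and flag availability depends on your Splunk version, so check the docs for your release:

```shell
# On the peer being decommissioned (Splunk 6.6+): put it into manual
# detention so it stops accepting new data but keeps serving searches.
splunk edit cluster-config -manual_detention on

# On the same peer: take it offline gracefully. --enforce-counts makes the
# cluster master re-replicate its buckets so replication and search factors
# are met before the peer actually shuts down.
splunk offline --enforce-counts

# On the cluster master (Splunk 6.5+): spread bucket copies evenly across
# the remaining peers.
splunk rebalance cluster-data -action start
```

These commands need a running Splunk instance and admin credentials (the CLI will prompt for them if you're not already authenticated).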

Those are the steps, but not necessarily the exact order. Depending on how large your maintenance window is and the speed of both your old and new hardware, you might be able to remove the older indexers more quickly, or you may need to go more slowly.

Also note that the data rebalance can disrupt running searches, so it should be done during a maintenance window.
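To stay inside that window, the cluster master exposes the rebalance state through the same CLI. A sketch (run on the cluster master):

```shell
# Check whether a data rebalance is still in progress.
splunk rebalance cluster-data -action status

# Stop an in-flight rebalance if it threatens to overrun the window;
# it can be restarted later with -action start.
splunk rebalance cluster-data -action stop
```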

In my environment it took approximately 8 hours to replicate an indexer's data, and replication only partially completed until I either rebalanced the cluster or took a peer offline (which triggered the master to start moving buckets around).

Moving approximately 6-8 TB took a few days of work. One obvious step I didn't put into the list is removing the decommissioned indexer from the cluster master / monitoring systems, but I assume that is obvious enough 🙂


Esteemed Legend

Why are you asking this again if you already got an answer in another thread?


New Member

I wanted more specifics on the process, as I'm fairly new.



New Member

Thanks a lot 🙂


SplunkTrust

That sounds like a really... cavalier... way of going about it, but since it's @woodcock who said it, I would bet it will work fine. It has the major advantage that there is not much to fat-finger, and very few moving parts other than the data auto-replicating itself to the new servers. You also get to choose your timing for the move, and after one of these you'll know how much of a hit you'll take, and for how long, during the migration.

https://answers.splunk.com/answers/567791/migrating-several-splunk-instances-into-one-1.html

(It's best to include the name and a link, to minimize the screams and flailing and technicolor from those of us who are more belt-and-suspenders-and-duct-tape-and-safety-pin types. )

A few terabytes is nothing to be scared of.

Esteemed Legend

Quick, cheap, good: pick 2. In this case, I pick "cheap" and "good"; it will definitely be slow.

SplunkTrust

@woodcock - I want to hear the quick and good (but not cheap) solution. Does it involve a new SAN?


Esteemed Legend

I am always looking to reduce risk. This approach exploits the whole reason clustered indexers exist (oops, an indexer died -- no problem) and it gets a TON of testing. It is guaranteed to work, and if you have any problems, you will get good support. I very much like to stay on the beaten path when manipulating indexed data.

Esteemed Legend

Also, no downtime at all and you don't need to wait for a maintenance window. Just start rocking out.
