Getting Data In

Move data from standalone indexers to cluster


I have 3 standalone indexers and another 3 indexers in a cluster. We want to decommission the 3 standalones, but first we have to move their data onto the cluster. I imagine the process would be something like: roll all hot buckets to warm, then rsync the warm and cold mounts/directories to a temp directory on one of the indexer cluster members (standalone 1 to cluster peer 1, 2 to 2, 3 to 3).

But when we do rsync the data over, how do I get the new indexer to recognize the old imported data? Is it as simple as merging the imported data into the appropriate index directory on the new indexer? For example, copying the old wineventlog index into the same-named directory on the new indexer? Would that work, or is there more to it?
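For concreteness, the roll-then-copy idea described above might look roughly like the sketch below. The hostnames, paths, index name, and credentials are all placeholders, and (as the answers note) this is not a supported migration method:

```shell
# Hypothetical sketch only -- hostnames, paths and index name are made up.

# 1. Roll hot buckets to warm on the old standalone indexer:
splunk _internal call /data/indexes/wineventlog/roll-hot-buckets \
    -auth admin:changeme

# 2. Stop the indexer so nothing changes under the copy:
splunk stop

# 3. Copy the warm and cold bucket directories to a temp area on the
#    matching cluster peer (standalone 1 -> cluster peer 1, etc.):
rsync -av /opt/splunk/var/lib/splunk/wineventlog/db/ \
    idxcluster1:/tmp/import/wineventlog/db/
rsync -av /opt/splunk/var/lib/splunk/wineventlog/colddb/ \
    idxcluster1:/tmp/import/wineventlog/colddb/
```

Getting the copied buckets to that temp area is the easy part; the answers below explain why simply merging them into the live index directory is not safe.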

Is there a Splunk-native command to move all data from indexer A to indexer B? Is there a better (or correct) way to make the new indexer recognize the imported data?

I appreciate any help!  Thanks.




It's just like @richgalloway said: there is no supported way to do this.

You could try the following if it works, BUT do it at your own risk and responsibility! I haven't tested this!

The Splunk docs say that when you add an old standalone indexer to a cluster, its existing buckets are not converted to clustered buckets. Based on that, you could try (IF you have a TEST environment to do it in) to emulate this behaviour by moving those buckets into your cluster. BUT it needs at least the following:

  • The indexes you are moving cannot have the same name as any existing clustered index (e.g. any internal indexes)
  • The imported buckets must not be created as clustered buckets

If you cannot fulfil the above, then do not try the following! Instead, you should ask for help from Splunk Professional Services!

  1. Stop the old indexer.
  2. Transfer the wanted buckets from the source peer to an individual target node (no _* buckets, and no indexes with the same name as an already-clustered index, can be moved!).
  3. Copy the $SPLUNK_DB/<index name>.dat file from source to target for every moved index, on every indexer that has that index.
  4. Update the cluster manager's indexes.conf with those new indexes. Make sure the new indexes have the attribute repFactor = 0, not auto!
  5. Apply the new indexes.conf from your CM to your cluster.
  6. Test, test, test.
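In concrete terms, steps 4 and 5 might look roughly like this sketch (the index name, paths, and credentials are placeholders; untested, at your own risk):

```shell
# Hypothetical sketch of steps 4-5. On the cluster manager, add the
# imported index to the cluster bundle's indexes.conf:
cat >> $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf <<'EOF'
[wineventlog_imported]
homePath   = $SPLUNK_DB/wineventlog_imported/db
coldPath   = $SPLUNK_DB/wineventlog_imported/colddb
thawedPath = $SPLUNK_DB/wineventlog_imported/thaweddb
# repFactor = 0 is the critical part: auto would make the cluster try
# to replicate the imported buckets, which is exactly what must not happen.
repFactor  = 0
EOF

# Push the new bundle from the CM to the peers:
splunk apply cluster-bundle --answer-yes -auth admin:changeme
```

The repFactor = 0 setting keeps the cluster from treating the imported buckets as replicated data, matching step 4 above.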

If this fails, you will probably be in a situation where your cluster is down. To fix that, you will probably have to manually edit the indexes.conf files to remove those index configurations, remove them from the CM as well, and do a new apply.

But as I said: you must test this first in your test environment, ensure that those requirements are fulfilled, and reserve some time for possible downtime when you do this in production.

And once again, you do this at your own risk and responsibility!!!

r. Ismo



There is no documented way to do that. Splunk recommends engaging Professional Services for this situation.

It's not as simple as copying data from one indexer to another, because care must be taken to ensure bucket IDs are not duplicated.
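To make the bucket-ID concern concrete: warm and cold bucket directories are named db_&lt;newestTime&gt;_&lt;oldestTime&gt;_&lt;localID&gt;, and local IDs are only unique per indexer, so two indexers can each own a bucket with the same ID. A toy illustration with plain empty directories (all timestamps and IDs are made up):

```shell
# Toy illustration only: simulate a bucket-ID collision with empty dirs.
demo=$(mktemp -d)
mkdir -p "$demo/wineventlog/db"
cd "$demo/wineventlog/db"

# A bucket already on the target indexer, local ID 42:
mkdir db_1700000200_1700000100_42

# An incoming bucket from the old indexer also happens to use local ID 42;
# dropping it in as-is would collide, so it must get an unused ID first:
incoming=db_1700000300_1700000250_42
mkdir "${incoming%_42}_43"

ls
```

Renaming by hand like this is exactly the kind of error-prone bookkeeping that makes the migration unsupported, which is why Professional Services is the recommendation.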

If this reply helps you, Karma would be appreciated.