Deployment Architecture

Indexer Cluster Migration from old to new physical servers

96nick
Communicator

I would like a sanity check on if my plan is sound when it comes to my indexer cluster migration. Currently I'm changing the following:

  • Migrating indexer cluster from old hardware to new hardware
  • Implementing new indexes.conf to take advantage of volumes and to address changes in partitions

Some misc notes:

  • The indexers are on Linux and are moving to servers that also run Linux
  • Version is staying the same

I have the following challenges with this that I need to address during the move:

  • We presently have everything logging to /opt/splunk/var/lib/splunk...; this will change to take advantage of our new SSDs (/fastdisk), with cold/non-priority data going to HDD (/slowdisk). Nothing will go to the old location on the new systems.
  • The /opt partition on the new servers is smaller than the /opt partition on the old ones, so a straight copy-and-paste won't work.
  • The new indexes.conf adds its own challenge to this migration: making sure everything in it is correct (a rough sketch of the volume layout is below).
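Roughly, the volume-based indexes.conf I have in mind looks like this; the volume names, sizes, and the [main] stanza below are placeholders to show the shape, not the real values:

    [volume:fast]
    path = /fastdisk/splunk
    maxVolumeDataSizeMB = 900000

    [volume:slow]
    path = /slowdisk/splunk
    maxVolumeDataSizeMB = 4000000

    [main]
    homePath   = volume:fast/defaultdb/db
    coldPath   = volume:slow/defaultdb/colddb
    # thawedPath cannot reference a volume
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb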

Plan:

  1. Rewrite indexes.conf to use volumes and to cover the other changes driven by the physical server move
  2. Rsync data from idx1 to idx1-new while Splunk is running (hot+warm+cold); see the rsync sketch after this list
  3. Install Splunk (7.3.3) on idx1-new and copy config over
  4. Verify all ports are open on new system that need to be
  5. Mount NAS on new indexer (frozen data only)
  6. Update SPLUNK_DB in etc/splunk-launch.conf (example after this list)
  7. Re-enter the bindDNpassword in .../authentication.conf (or you will lock out your AD account); example after this list
  8. Put the CM in maintenance mode (CLI shown after this list)
  9. Turn off idx1, remove idx1 from cluster, do final rsync to idx1-new
  10. Change hostnames from idx1-new to idx1 (DNS)
  11. Place indexes.conf in .../etc/system/local (temporary)
  12. Start Splunk
  13. Add idx1-new to CM, restart Splunk on idx1-new
  14. Repeat for second indexer.
  15. Place new indexes.conf on CM, push out to indexers, remove ESL/indexes.conf
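For step 2 (and the final pass in step 9), the rsync would look roughly like this. The single target path is a placeholder; in practice the hot/warm data lands under /fastdisk and the cold buckets under /slowdisk, so there would be one pass per target:

    # First pass while Splunk is still running on idx1 (step 2); the copy will be
    # slightly stale, which is fine because a final pass follows in step 9.
    rsync -avh --progress /opt/splunk/var/lib/splunk/ idx1-new:/fastdisk/splunk/

    # Final delta pass after Splunk is stopped on idx1 (step 9).
    rsync -avh --delete /opt/splunk/var/lib/splunk/ idx1-new:/fastdisk/splunk/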
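Step 6 is a one-line change in etc/splunk-launch.conf (the /fastdisk path here is just the assumption from the layout above; with volumes defined in indexes.conf it mainly matters for anything still defaulting to $SPLUNK_DB, such as thawedPath):

    # $SPLUNK_HOME/etc/splunk-launch.conf on idx1-new
    SPLUNK_DB=/fastdisk/splunk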
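For step 7, the bindDNpassword lives in the LDAP strategy stanza of authentication.conf; re-entered in plain text, it gets re-encrypted by Splunk on restart. The stanza name and DN below are placeholders:

    [my_ldap_strategy]
    bindDN = CN=svc-splunk,OU=Service Accounts,DC=example,DC=com
    bindDNpassword = <re-enter the plaintext password here>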
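And the cluster-side commands for steps 8 and 13, assuming the default management port, with the URI, replication port, and secret as placeholders:

    # On the cluster master (step 8)
    splunk enable maintenance-mode

    # On idx1-new, join it to the cluster (step 13)
    splunk edit cluster-config -mode slave -master_uri https://cm.example.com:8089 -replication_port 9887 -secret <pass4SymmKey>
    splunk restart

    # Back on the cluster master, once the peers report Up
    splunk disable maintenance-mode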

How does that sound?

1 Solution

richgalloway
SplunkTrust

Let the cluster do the work for you.

  1. Add the new indexers to the cluster
  2. Put the old indexers into manual detention.  This stops them from accepting new data so all new data will be on the new indexers.
  3. Stop ONE indexer using splunk offline --enforce-counts.  This will make sure that indexer's buckets are transferred to other (new) indexers (see the CLI sketch after this list).
  4. Wait for the indexer to shut down.
  5. Repeat steps 3-4 with each of the remaining old indexers one at a time.
  6. Retire the old indexers.
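The CLI for steps 2 and 3, run on the old peers, is roughly:

    # On each old indexer (step 2)
    splunk edit cluster-config -manual_detention on

    # On ONE old indexer at a time (step 3)
    splunk offline --enforce-counts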

One problem you'll have with either approach is the different indexer configurations needed.  You can't do that from the Cluster Master because the CM will push the same config files to all indexers.  You'll need to manually put the correct indexes.conf file on each indexer and remove all indexes.conf from the CM.  Once the migration is complete you can put indexes.conf back on the CM.

---
If this reply helps you, Karma would be appreciated.


96nick
Communicator

Rich,

Troubleshooting question regarding this topic. I removed indexes.conf from the CM, placed it in .../etc/system/local of my old indexers, and pushed a bundle out. I also placed the new indexes.conf on both of my new indexers. Upon adding a new indexer to the existing cluster, I received the following error in splunkd.log on the new indexer:

App='system' with replicated index='_introspection' is neither in the bundle downloaded from master nor managed by local deployment client. Either define this index at the master or  specify repFactor=0 on peer to skip replication.

This error repeated for all indexes that we have. Since the CM doesn't have the indexes.conf and it's all local, does that change how I should approach this? Or would you have an idea on how to attack this?
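The message itself seems to point at one of two fixes: define the index at the master, or set something like the following on each new peer (sketch for _introspection only, repeated per affected index):

    # etc/system/local/indexes.conf on the new peer
    [_introspection]
    repFactor = 0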

 

Thanks!
