If you cannot borrow additional disk space for colddb for the duration of the migration, then you have two options:

1. Freeze or lose the data that doesn't fit during the migration, as guided in the previous post.
2. Replace the nodes one by one and detach/attach the SAN disks from the old node to the new one.

The 1st is the much safer and easier option. The 2nd can lead to situations where, in the worst case, you lose events.

For option 2, set up a new node without the SAN and replicate the SSD storage, the Splunk software, and the configuration from the old node. If you are using rpm or deb packages, install the package first and then use rsync to replicate the rest from the old node (there is a sketch of this below). Be sure that you carry over splunk.secret, the instance GUID, and all other configuration from the old node. Then shut down the old instance, do a final rsync with the delete option, detach the SAN disks, and move them to the new node. Ensure that you have the correct volume group and file system definitions with the correct permissions (see the volume group sketch below). After that, bring the new node up in place of the old one. It is probably also a good idea to increase some cluster timeout settings to avoid unneeded bucket replications to the other peers in the cluster (see the last sketch below).

As you can see, this is a somewhat complicated procedure, but it is doable. Fortunately you have a test environment where you can practice it and write step-by-step instructions for the production migration.
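Here is a minimal sketch of the rsync replication, assuming Splunk is installed under /opt/splunk, the SSD-resident indexes live under a hypothetical /ssd/splunk-indexes path, and the old node is reachable as old-idx; adjust paths and hostnames to your environment.

# Stop Splunk on the new node before overwriting its configuration.
/opt/splunk/bin/splunk stop

# First pass while the old node is still running (data keeps changing,
# so this only pre-seeds the new node).
rsync -a old-idx:/opt/splunk/etc/ /opt/splunk/etc/
rsync -a old-idx:/ssd/splunk-indexes/ /ssd/splunk-indexes/

# splunk.secret and the instance GUID live under etc/ and are covered
# by the rsync above:
#   /opt/splunk/etc/auth/splunk.secret
#   /opt/splunk/etc/instance.cfg

# After the old instance has been shut down, run the final pass with
# --delete so files removed on the old node are removed here too.
rsync -a --delete old-idx:/opt/splunk/etc/ /opt/splunk/etc/
rsync -a --delete old-idx:/ssd/splunk-indexes/ /ssd/splunk-indexes/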
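For the volume group part, something along these lines should work, assuming the cold buckets sit on an LVM volume group on the SAN LUNs; the names vg_cold, lv_colddb, and the /san/colddb mount point are made up for illustration.

# On the old node, after the final rsync and with Splunk stopped:
umount /san/colddb
vgchange -an vg_cold      # deactivate the volume group
vgexport vg_cold          # mark it exportable before detaching the LUNs

# Detach the LUNs from the old node and attach them to the new one on
# the storage side, then on the new node:
vgscan                    # rediscover the volume group
vgimport vg_cold
vgchange -ay vg_cold
mkdir -p /san/colddb
mount /dev/vg_cold/lv_colddb /san/colddb

# Splunk must own the cold buckets; match the user it runs as.
chown -R splunk:splunk /san/colddb

Remember to carry the corresponding /etc/fstab entry over to the new node as well, so the mount survives a reboot.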
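Regarding the cluster timeouts, one way to avoid the unneeded bucket fix-ups is maintenance mode on the cluster manager; alternatively you can raise the peer timeouts in server.conf on the manager. The values below are illustrative only, so size them to your expected downtime.

# Run on the cluster manager. Option A: maintenance mode suppresses
# bucket fix-ups while the peer is being migrated.
/opt/splunk/bin/splunk enable maintenance-mode
#   ... migrate the peer ...
/opt/splunk/bin/splunk disable maintenance-mode

# Option B: raise the timeouts the manager uses before it declares the
# peer gone and starts replicating its buckets to the other peers.
cat >> /opt/splunk/etc/system/local/server.conf <<'EOF'
[clustering]
heartbeat_timeout = 600
restart_timeout = 3600
EOF
# Restart the manager for the new settings to take effect.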