Working on migrating from a RHEL 6 VM running Splunk 8.0.5 to a RHEL 8 VM with the latest Splunk, 8.2.6 (no clustering). I've read and followed the installation and migration docs, and I've been able to verify with some old data that it's working. But another thing I'd like to do is optimize the indexes, put them on new VM disks, and distribute them between hot/warm and cold/frozen storage; the problem is our indexes are pretty big.
My understanding is that in order to move/migrate the indexes, I'll need to stop Splunk on the old host, copy/rsync the directories over, then modify indexes.conf on the new host and start Splunk on it (of course with the required DNS changes and forwarder reconfiguration to point to the new host). But the volumes I have are about 3 TB, 3 TB, and one large 25 TB volume. I tested rsync with some data directories and it looks like it would take several days. For the large volume I don't think I have any choice but to detach the disks from the old VM and attach them to the new one. But even for the two smaller volumes it looks like it will take almost 24 hours to copy 5-6 TB, and I don't think I can keep Splunk stopped for that long; it would definitely lose data.
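Roughly, I was picturing a cutover along these lines (just a sketch; /opt/splunk, /data/splunk and the host name newidx are placeholders for whatever the real layout is):

# On the old indexer: stop Splunk so no buckets change during the copy
/opt/splunk/bin/splunk stop

# Copy the index data to the new host (paths are placeholders)
rsync -aH --stats /data/splunk/ newidx:/data/splunk/

# On the new indexer: update the homePath/coldPath locations in indexes.conf,
# then start Splunk
/opt/splunk/bin/splunk start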
Am I understanding this correctly or is there a better and/or quicker way to do this?
The reason I wanted to use new VM disks is that the old host has several VM disks combined to make each of the OS volumes and it's just messy (e.g. the 25TB mount point has 3 underlying disks); plus, with new disks I can also distribute the indexes and hot/cold buckets better between fast and not-so-fast storage.
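For the fast/slow split I had something like this in mind for indexes.conf (illustrative only; the volume names, paths, sizes and index name are made up):

# Fast storage for hot/warm buckets
[volume:hot]
path = /fast_disk/splunk
maxVolumeDataSizeMB = 3000000

# Slower storage for cold buckets
[volume:cold]
path = /slow_disk/splunk
maxVolumeDataSizeMB = 25000000

# Example index split across the two volumes
[my_index]
homePath   = volume:hot/my_index/db
coldPath   = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb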
Would really appreciate it if anyone could provide suggestions/advice.
Hi
One way I have used several times when migrating big volumes (of any size, really) to a new host is to run rsync several times.
That way the amount of data transferred in the final sync is much smaller (depending, of course, on your daily volume). You could also add --delete to the last rsync before stopping the old IDX, to get rid of stale files before the final sync (it's even quicker that way and you will get a time estimate for it).
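A rough sketch of that loop, assuming the index data lives under /data/splunk and the new host is reachable as newidx (both placeholders):

# Pre-seed while Splunk is still running on the old host; repeat as often as you like
rsync -aH --stats /data/splunk/ newidx:/data/splunk/

# One more pass with --delete to drop files that no longer exist on the source,
# and to get a feel for how long the final sync will take
rsync -aH --delete --stats /data/splunk/ newidx:/data/splunk/

# Final cutover: stop Splunk on the old host, then run one last sync
/opt/splunk/bin/splunk stop
rsync -aH --delete --stats /data/splunk/ newidx:/data/splunk/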
r. Ismo
@gcusello Thanks, that was an interesting approach, but I wasn't too sure about it, as the copy itself was going to take too long and we couldn't stop Splunk for several days.
@isoutamo Thanks very much. I too was thinking about doing repeated rsyncs over time, but your steps laid it out very nicely. I did the final cutover yesterday and everything worked correctly. I had an LVM issue due to a name conflict, but nothing major. Thanks again
Hi @hasan,
your analysis is correct: the proper approach is to stop the old IDXs, copy the indexes to the new machine (possibly changing the location of the cold or hot/warm data) and restart the new one; but for large indexes it takes too much time.
As a workaround, you could create a list of the cold/frozen buckets you have at the starting point (for the final check) and then copy all of them, which should be most of the data, to the new server.
Then, when that's finished, you can follow the normal procedure: stop the old machine and copy the hot/warm buckets; at the end, before restarting the new IDXs, check against the initial list whether some warm buckets were rolled to cold in the meantime, and copy those as well.
I never tried this approach, but it should work.
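A minimal sketch of that check, assuming the cold buckets live under /data/splunk/<index>/colddb (a placeholder layout; adjust to your coldPath settings):

# Before the big copy: record which cold buckets exist right now
find /data/splunk -maxdepth 3 -type d -path '*/colddb/db_*' | sort > cold_before.txt

# ... copy the cold/frozen data, then stop Splunk and copy hot/warm ...

# After stopping the old indexer: see which buckets rolled to cold in the meantime
find /data/splunk -maxdepth 3 -type d -path '*/colddb/db_*' | sort > cold_after.txt
comm -13 cold_before.txt cold_after.txt > cold_new.txt   # buckets only in the new list
# copy everything listed in cold_new.txt to the new indexer as well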
Ciao.
Giuseppe