
How to back up/restore Splunk db to a new system

Jaci
Splunk Employee

Seeking advice on how best to back up and restore Splunk databases to newly built systems with minimal application downtime, for 3 index servers. Current OS: RHEL; new OS: Oracle Enterprise Linux. Current install paths: /splunk1, /splunk2, /splunk3 on the respective systems. New install path: /opt/splunk. We currently have Veritas configured but not effectively implemented/utilized (part of the reason for the rebuild); the new systems will not have clustering installed. Splunk version 4.0.8.

1 Solution

Chris_R_
Splunk Employee

Just to add detail to these steps: after talking to Zach, who explained they are doing a hardware switchover, it looks like they may not have the option to run concurrent systems with duplicate data, although that is the ideal scenario, especially for that much data!

If you must do a hard switchover migration, follow these steps (a shell sketch of the copy steps appears after the list):
1) Ensure the indexes are all created on your new server so the db directory structure is in place.
2) Back up your warm and cold buckets, and copy/move them to the new data store location (IMPORTANT: do NOT back up your hot buckets).
warm buckets:
$SPLUNK_HOME/var/lib/splunk/<indexname>/db/db_NN_NN_NN dirs
cold buckets:
$SPLUNK_HOME/var/lib/splunk/<indexname>/colddb/db_NN_NN_NN dirs
3) Perform a roll-to-warm command so your hot buckets are rolled to warm. I recommend running it on the CLI so you know when it's complete. From the CLI:

For all indexes:
./splunk search "| debug cmd=roll"

or for an individual index:
./splunk search "| debug cmd=roll index=<indexname>"

4) Immediately stop your production Splunk so no new data comes in. Note that you will lose any data that arrives in the hot buckets between the time the roll to warm completes and the shutdown.
5) The roll-to-warm command will have created new warm buckets; back these up as well and copy/move them to the new data store location:
$SPLUNK_HOME/var/lib/splunk/<indexname>/db/db_NN_NN_NN dirs

6) Start up Splunk and search your old data.
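
Here is a minimal shell sketch of steps 2, 3 and 5, assuming a single index named main, an old install under /splunk1 (as in the question), and the new data store reachable over SSH as newhost under /opt/splunk; the hostname is illustrative:

# Assumptions: index "main", old SPLUNK_HOME=/splunk1, new host "newhost"
# with Splunk installed under /opt/splunk (indexes already created there).
OLD=/splunk1/var/lib/splunk/main
NEW=newhost:/opt/splunk/var/lib/splunk/main

# Step 2: copy warm buckets (db_*), skipping hot buckets (hot_*), then cold.
rsync -av --exclude='hot_*' "$OLD/db/" "$NEW/db/"
rsync -av "$OLD/colddb/" "$NEW/colddb/"

# Step 3: roll hot buckets to warm (4.0.x syntax from this thread).
/splunk1/bin/splunk search "| debug cmd=roll index=main"

# Step 4: stop Splunk so no new hot buckets appear.
/splunk1/bin/splunk stop

# Step 5: rerun the warm-bucket copy; rsync only transfers the buckets
# created by the roll, since everything else is already on the new host.
rsync -av --exclude='hot_*' "$OLD/db/" "$NEW/db/"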

Simeon
Splunk Employee

There are a few questions you should first answer before undertaking this:

  1. Can you concurrently run Splunk in both environments?
  2. What are your storage limitations?
  3. How is data currently being backed up?

If you can run both Splunk environments concurrently, then cutting over the inputs and setting up distributed search from the new system to the old one would be the ideal scenario. Assuming you test and stage on the new environment, the cutover should be fairly seamless.
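
As a sketch of the distributed-search part, assuming the old indexers are reachable as oldidx1/oldidx2/oldidx3 (hypothetical hostnames) on the default management port 8089, you would list them in distsearch.conf on the new system and then set up distributed-search trust between the instances:

# distsearch.conf on the new system -- keep searching the old indexers
# during the overlap period (hostnames are illustrative)
[distributedSearch]
servers = oldidx1:8089,oldidx2:8089,oldidx3:8089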

If you are unable to run both environments concurrently and have storage limitations, then you will need to plan the methodology for copying over the individual buckets. You can very easily configure Splunk to use a new path for the indexes/dbs (see the indexes.conf sketch below). The hard part is getting them onto the new system, as 2-3 TB of data may take a very long time to move.
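
For example, a minimal indexes.conf sketch pointing an index at the new /opt/splunk paths from the question (the index name main and its defaultdb directory are the stock defaults; adjust for your own indexes):

# indexes.conf on the new system
[main]
homePath = /opt/splunk/var/lib/splunk/defaultdb/db
coldPath = /opt/splunk/var/lib/splunk/defaultdb/colddb
thawedPath = /opt/splunk/var/lib/splunk/defaultdb/thaweddb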

One potential solution is to use an NFS mount for the data residing on the initial machine. This approach depends on the reliability of the current disks and of NFS. For example:
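
A hedged example of such a mount, assuming the old host exports /splunk1 over NFS (hostname and export path are illustrative):

# On the new system: mount the old data store read-only over NFS
mount -t nfs -o ro oldhost:/splunk1 /mnt/old-splunk1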

ppang
Splunk Employee

I tried this on Splunk 4.2.2 and got the following error in the CLI:

ppang-mbp-2:bin paulpang$ ./splunk search "| debug cmd=roll index=main"
FATAL: Error in 'DebugCommand': command=roll issued successfully to index=main, but debug command is deprecated, try the CLI command instead
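
As the error message suggests, the debug search command was deprecated after 4.0.x. One replacement worth verifying against your version's documentation is the REST endpoint for rolling hot buckets, callable from the CLI:

# Roll the hot buckets of index "main" to warm via the REST API (4.2+)
./splunk _internal call /data/indexes/main/roll-hot-buckets -method POST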
