All Apps and Add-ons

Consolidate databases from multiple Splunk instances

ghnwmlguy
Explorer

I currently have two instances of Splunk running on two separate hosts. I recently purchased a license so that I can consolidate the two onto one host. Is there a way to consolidate indexed logs/databases onto one host without losing data?


hulahoop
Splunk Employee

Yes, this is possible. However, if you have 2 separate servers it may be best to keep both and have one distribute searches to the other. This way you are effectively searching both Splunk servers and get the added bonus of 2 servers sharing the work and executing in parallel. More on distributed search if this interests you: http://www.splunk.com/base/Documentation/latest/Admin/Whatisdistributedsearch.
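
If you go the distributed-search route, the peer can be added from the CLI on the server that will act as the search head. A minimal sketch; the hostname and credentials are placeholders, and the exact syntax varies by Splunk version (check the distributed search docs linked above):

./splunk add search-server -host splunk2.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme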

If, however, you are looking to re-purpose one of the servers and truly need to consolidate your datastore, then the process is similar to backing up your Splunk datastore, covered here: http://www.splunk.com/base/Documentation/latest/Admin/Backupindexeddata.

This is the skeleton process (assuming you have enough storage):

  1. Redirect the incoming data stream to Splunk1
  2. Shut down Splunk2
  3. Roll the hot bucket on Splunk2 to grab the latest data
  4. Move all buckets to Splunk1 after ensuring bucket sequence ids are unique
  5. Repeat steps 3 and 4 for each index.

Steps 1 and 2 are self-explanatory.

For step 3, you can issue this command on the CLI:

./splunk _internal call /data/indexes/<index_name>/roll-hot-buckets -auth <admin_username>:<admin_password>
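
Since step 5 repeats this per index, the same call can also be wrapped in a quick shell loop. A minimal sketch; the index names here are placeholders for your own:

for idx in main my_index_1 my_index_2; do
    ./splunk _internal call /data/indexes/$idx/roll-hot-buckets -auth <admin_username>:<admin_password>
done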

For step 4, on Splunk1 and Splunk2, look in:

  • $SPLUNK_HOME/var/lib/splunk/defaultdb/db
  • $SPLUNK_HOME/var/lib/splunk/defaultdb/colddb

The bucket directories in these folders each end with a sequence ID:

db_<newest_time>_<oldest_time>_<id>

The first two fields are the newest and oldest event times in the bucket (epoch seconds); the trailing <id> is the sequence ID that must stay unique.

You need to ensure all the bucket directories on Splunk1 and Splunk2 have unique IDs. Write a script (a minimal sketch follows the directory lists below) or change the sequence IDs manually if there are any duplicates between Splunk1 and Splunk2. Then move all the directories from

  • Splunk2: $SPLUNK_HOME/var/lib/splunk/defaultdb/db
  • Splunk2: $SPLUNK_HOME/var/lib/splunk/defaultdb/colddb

to

  • Splunk1: $SPLUNK_HOME/var/lib/splunk/defaultdb/colddb
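
For the duplicate-ID check mentioned above, here is a minimal shell sketch. It assumes the default defaultdb paths shown above and standard POSIX tools; the /tmp file names are placeholders. Run this on each host to collect the trailing sequence IDs:

ls $SPLUNK_HOME/var/lib/splunk/defaultdb/db $SPLUNK_HOME/var/lib/splunk/defaultdb/colddb | awk -F_ '/^db_/ {print $NF}' | sort > /tmp/bucket_ids.txt

Copy Splunk2's list over to Splunk1, then print the IDs that exist on both hosts; those are the buckets to renumber before moving:

comm -12 /tmp/bucket_ids_splunk1.txt /tmp/bucket_ids_splunk2.txt

A colliding directory can then be renamed to any unused ID, for example:

mv db_1300000000_1290000000_7 db_1300000000_1290000000_107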



yannK
Splunk Employee

Since index clustering was introduced, recent bucket folders contain an extra piece of information: the GUID of the originating indexer.

Because of that, newer buckets are unlikely to collide on bucket ID. But beware: if you have hot buckets, or older buckets (created before Splunk 6 or before clustering was set up), you still want to check the bucket IDs.
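
To illustrate the difference (the timestamps and GUID below are made up), a pre-clustering bucket name looks like:

db_1389230491_1389230488_5

The same bucket written by a clustered (Splunk 6+) indexer carries the indexer's GUID, so IDs from different indexers cannot collide:

db_1389230491_1389230488_5_A7C14F2B-3E55-4B8A-9D21-0C6E8F1A2B3D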


hulahoop
Splunk Employee

No need to apologize, just let us know how it goes when you have the time to revisit this. 🙂


ghnwmlguy
Explorer

I apologize for not giving the thumbs up yet... I have run into space issues on the primary host and need to put in a new drive. When that is done, I will use this process.
