I have recently created a version 8.0.6 indexer cluster with SmartStore (S2) enabled.
When necessary, I would like to spin up a temporary Splunk server to read historic data from S2. One use case: there is an incident and the investigators need roughly two years of data out of S2 for forensic analysis.
We don't want to pull that amount of historic data back into our production cache, as it would cause performance issues and shorten the retention of the local cache.
Does anyone know how to attach a Splunk instance ad hoc to an existing S2 bucket?
I presume the new instance will need all the existing indexes.conf information, but I am not sure what else is required.
Please advise.
Thank you!
I'm not sure if there is a better way, but one option is to transfer the desired data from S2 into the thawed folders on a stand-alone instance. You will need the indexes.conf information from the production system.
One advantage of using the thawed folder is that the stand-alone system will not try to manage the data. The data will remain in place while the investigators do their thing, and the production S2 deployment will not be affected by anything happening on the stand-alone system.
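As a rough sketch of what that could look like on the stand-alone box: define the index with a thawedPath, copy the bucket directories out of the remote store into thaweddb, and rebuild them. All index names, paths, the S3 location, and the bucket directory name below are illustrative assumptions — mirror your production indexes.conf and your actual remote store layout.

```shell
# 1. Define the index on the stand-alone instance (stanza name and
#    paths are examples; copy the real settings from production,
#    minus the SmartStore/remotePath settings so this copy stays local).
cat >> "$SPLUNK_HOME/etc/system/local/indexes.conf" <<'EOF'
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
EOF

# 2. Copy the desired bucket directory from the remote store into
#    thaweddb (source path and bucket ID here are hypothetical).
aws s3 cp --recursive \
  "s3://my-smartstore-bucket/my_index/db/db_1546300800_1514764800_123/" \
  "$SPLUNK_DB/my_index/thaweddb/db_1546300800_1514764800_123/"

# 3. Rebuild the bucket's index metadata so it becomes searchable.
"$SPLUNK_HOME/bin/splunk" rebuild \
  "$SPLUNK_DB/my_index/thaweddb/db_1546300800_1514764800_123" my_index

# 4. Restart so Splunk picks up the thawed bucket.
"$SPLUNK_HOME/bin/splunk" restart
```

Because the data sits in thawedPath, the stand-alone instance will not apply retention or SmartStore eviction to it, which matches the "leave it in place for the investigators" goal above.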
Thank you for the reply.
Followup question:
Looking for docs on "thawing" S2 data into a "thawed directory"...
Have you done this before?