Deployment Architecture

Unable to see old data in index after moving its location on the filesystem


I'm working with a Splunk Enterprise 6.4.1 setup that has an indexer cluster spread over three Windows nodes. Typically, when a new index is required, a mapped drive is created in d:\Splunk\var\lib\splunk.

However, due to an error, a new index was created but the mapped drive was not mapped to the correct location, so Splunk just went ahead and created a folder directly on the D drive at d:\Splunk\var\lib\splunk\NewIndex. I didn't notice this until recently, when the D drive on these indexers started getting very full.

To fix this I came up with the following solution: rename the folder the data is in, move the mapped drive to the correct location, and copy the data over. The steps I actually took were:
1. Disabled the forwarders sending data to this index.
2. On the cluster master I changed the definition of the index to include 'disabled = true', then distributed this config to all 3 indexers (this was so the Splunk instances could stay up whilst I moved the index around).
3. On all three indexers I then renamed the current folder d:\Splunk\var\lib\splunk\NewIndex to NewIndex.old, and changed the mapped drive so it mounts at d:\Splunk\var\lib\splunk\NewIndex.
4. I then copied the data from the incorrect location in NewIndex.old to the mapped drive (via copy and paste in one go).
5. Re-enabled the index definition in its config and distributed it to all 3 indexers.
6. Re-enabled the forwarders sending data to this index.
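
For reference, the disable/enable toggle in steps 2 and 5 amounts to a stanza change like the sketch below in the index's indexes.conf on the cluster master. This is illustrative only: the path layout is the usual default, and the stanza name is just the index from this post.

```
# indexes.conf sketch, pushed from the cluster master to all peers.
# Paths are illustrative defaults; the stanza name matches the index.
[NewIndex]
homePath   = $SPLUNK_DB\NewIndex\db
coldPath   = $SPLUNK_DB\NewIndex\colddb
thawedPath = $SPLUNK_DB\NewIndex\thaweddb
# Step 2: disable before moving data, then push the bundle.
disabled = true
# Step 5: set disabled = false (or remove the line) and push again.
```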

Data started arriving in Splunk again and everything looked fine, but Splunk had lost all the data from before I disabled the index! Comparing the size of the old and new locations, they are the same, so I'm certain it all copied over.
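
To rule out an incomplete copy more rigorously than a total-size comparison, one quick sanity check is to diff the two trees file by file. A minimal sketch (a hypothetical helper, not part of Splunk; it compares relative names and sizes only, not permissions or ACLs):

```python
import os

def compare_trees(old_root, new_root):
    """Compare relative file paths and sizes between two directory trees.

    Returns a list of (relative_path, reason) mismatches; an empty list
    means every file under old_root exists under new_root with the same
    size. Note: this checks names and sizes only, not ACLs or ownership.
    """
    def snapshot(root):
        files = {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                full = os.path.join(dirpath, name)
                files[os.path.relpath(full, root)] = os.path.getsize(full)
        return files

    old_files = snapshot(old_root)
    new_files = snapshot(new_root)
    problems = []
    for rel, size in old_files.items():
        if rel not in new_files:
            problems.append((rel, "missing in new location"))
        elif new_files[rel] != size:
            problems.append((rel, "size differs"))
    return problems
```

Running it as, say, `compare_trees(r"d:\...\NewIndex.old", r"d:\...\NewIndex")` returning an empty list would confirm the copy itself was complete.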

I couldn't see anything in the _internal logs about this, but I wondered if copying a huge amount of data in Windows using copy and paste might have upset the permissions. So I did the same process as before to disable the index, but this time updated the paths to point at the old directory 'NewIndex.old' and re-enabled it. The same problem occurred even with the old, unmoved data!
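
If the suspicion is that the bulk copy upset Windows permissions, one crude probe is to try opening every file under the index directory while running as the same user the Splunk service runs as. A hypothetical sketch (ACL problems typically surface here as PermissionError):

```python
import os

def unreadable_files(root):
    """Walk a directory tree and return paths the current user cannot open.

    A crude readability probe: run it as the account splunkd runs under.
    Windows ACL breakage after a bulk copy usually shows up as an OSError
    (PermissionError) on open, which this catches.
    """
    bad = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb"):
                    pass
            except OSError:
                bad.append(path)
    return bad
```

An empty result for both NewIndex and NewIndex.old would point away from permissions and toward Splunk simply not registering the buckets.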

I notice on the Bucket Status screen on the cluster master there are several entries similar to this under "Search Factor":
"cannot fix up search factor as bucket is not serviceable"
Under Replication Factor:
"cannot fix up search factor as bucket is not serviceable"
And under 'Indexes With Excess Buckets' the NewIndex is listed, though all the columns contain zeros.

So I assume the problem is that Splunk can't recognise the old pre-migration data. How do I get it to see it again?

