Deployment Architecture

Unable to see old data in index after moving its location on the filesystem

marrette
Path Finder

I'm working with a Splunk Enterprise 6.4.1 setup that has an indexer cluster spread over three Windows nodes. Typically, when a new index is required, a mapped drive is created in d:\Splunk\var\lib\splunk.

However, a new index was created in error: the mapped drive was never mapped to the correct location, so Splunk just went ahead and created a folder directly on the D drive at d:\Splunk\var\lib\splunk\NewIndex. I didn't notice this until recently, when the D drive on the indexers started getting very full.
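For context, the index definition on the cluster master looks roughly like this (the stanza name and paths below are only illustrative; NewIndex stands in for the real index name):

    [NewIndex]
    homePath   = D:\Splunk\var\lib\splunk\NewIndex\db
    coldPath   = D:\Splunk\var\lib\splunk\NewIndex\colddb
    thawedPath = D:\Splunk\var\lib\splunk\NewIndex\thaweddb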

To fix this I came up with the following solution: rename the folder the data is in, move the mapped drive to the correct location, and copy the data over. The steps I actually took were:
1. Disabled the forwarders sending data to this index.
2. On the cluster master I changed the definition of the index to include 'disabled = true', then distributed this config to all 3 indexers (this was so the Splunk instances could stay up whilst I moved the index around; a rough sketch of this change follows the list).
3. On all three indexers I then renamed the current folder d:\Splunk\var\lib\splunk\NewIndex to NewIndex.old, and changed the location of the mapped drive so that it became d:\Splunk\var\lib\splunk\NewIndex.
4. I then copied the data from the incorrect location in NewIndex.old to the mapped drive (via copy and paste in one go).
5. Re-enabled the index in its config and distributed it to all 3 indexers.
6. Re-enabled the forwarders sending data to this index.
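For reference, step 2 amounted to roughly the following on the cluster master. The file location and the use of --answer-yes are just how I'd sketch it, and NewIndex again stands in for the real index name:

    # $SPLUNK_HOME\etc\master-apps\_cluster\local\indexes.conf on the cluster master
    [NewIndex]
    disabled = true

    # Push the updated bundle out to all the peers
    splunk apply cluster-bundle --answer-yes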

Data started arriving in Splunk again and everything looked fine, but Splunk had lost all the data from before I disabled the index! Comparing the sizes of the old and new locations, they are the same, so I'm certain it all copied over.

I couldn't see anything in the _internal log about this, but I wondered if copying a huge amount of data in Windows using copy and paste might have upset the permissions. So I did the same process as before to disable the index, but this time updated the paths to point at the old directory 'NewIndex.old' and re-enabled. The same problem occurred using the old, unmoved data!
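If it does turn out to be a permissions problem, I was thinking a copy that preserves ACLs and timestamps might be safer than copy and paste, something along these lines (paths are illustrative; robocopy run from an elevated prompt):

    robocopy "D:\Splunk\var\lib\splunk\NewIndex.old" "D:\Splunk\var\lib\splunk\NewIndex" /E /COPYALL /DCOPY:T /R:1 /W:1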

I notice on the Bucket Status screen on the cluster master there are several entries similar to this under "Search Factor":
"cannot fix up search factor as bucket is not serviceable"
Under Replication Factor:
"cannot fix up search factor as bucket is not serviceable"
And under 'Indexes With Excess Buckets' the NewIndex is listed, though all the columns contain zeros.

So I assume the problem is that Splunk can't recognise the old, pre-migration data. How do I get it to see it again?

Thanks
Eddie
