
JournalSliceDirectory: Cannot seek to rawdata offset 0, path="..." when running a search in Splunk Web on a non-clustered Splunk indexer

Explorer

I am using Splunk 6.6.2

When I run a search in Splunk Web against the index with a time range of more than 30 days (for example index="indextest"), I get this error:


JournalSliceDirectory: Cannot seek to rawdata offset 0, path="/opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1/rawdata"

I have gone through some of the answers posted on Splunk Answers and tried a few fsck commands to repair the buckets.
First, I ran the fsck scan command to identify the corrupted buckets:

Eg:
splunk fsck scan --all-buckets-all-indexes

Output in Unix:
Operating on: idx=indextest bucket='/opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1/rawdata'

JournalSliceDirectory: Cannot seek to rawdata offset 0, path="/opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1/rawdata"

Corruption: corrupt slicesv2.dat or slices.dat
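
For reference, a minimal way to capture only the corruption findings from that scan (just a sketch, assuming a default /opt/splunk install and the 6.x fsck syntax shown above; adjust paths to your environment) is to filter the output:

cd /opt/splunk/bin
# scan every bucket in every index and keep only the lines that flag a problem
./splunk fsck scan --all-buckets-all-indexes 2>&1 | grep -i -E 'corrupt|cannot seek'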

Then I tried to repair them:
splunk fsck repair --all-buckets-all-indexes

Eg:
splunk fsck repair --one-bucket --index-name=indextest --bucket-name=db15023534821504459082_1 --try-warm-then-cold
Output in Unix:
Operating on: idx=indextest bucket='/opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1/'
(entire bucket) Rebuild for bucket='/opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1' took 64.23 milliseconds
Repair entire bucket, index=indextest, tryWarmThenCold=1, bucket=/opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1, exists=1, localrc=7, failReason=No bloomfilter in finalDir='/opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1'
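
Since the failure reason is "No bloomfilter", one quick sanity check (a sketch using the bucket path from the repair output above; adjust to your environment) is to list the bucket directory and see whether a bloomfilter file is actually there:

# list the bucket contents and look for a bloomfilter file
ls -l /opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1/ | grep -i bloom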

The issue was not resolved. Then I even tried disabling the index:

/opt/splunk/bin/splunk disable index nameofyour_index

I then started Splunk back up, re-enabled the index from the web GUI, and restarted Splunk.
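
For completeness, the disable/enable cycle I went through was roughly the following (a sketch assuming a default /opt/splunk install and the indextest index from above; the CLI may prompt for admin credentials):

# disable the index and restart so the change takes effect
/opt/splunk/bin/splunk disable index indextest
/opt/splunk/bin/splunk restart
# I re-enabled the index from the web GUI; the CLI equivalent is:
/opt/splunk/bin/splunk enable index indextest
/opt/splunk/bin/splunk restart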

Still the issue is not resolved.

Any help or hints would be appreciated.

1 Solution

Explorer

I fixed it now. I replaced the contents of the corrupted bucket with the contents of a non-corrupted bucket from the same index, then ran the following command against the corrupted bucket:

splunk fsck repair --one-bucket --index-name=indextest --bucket-name=db15023534821504459082_1 --try-warm-then-cold

Splunk repaired the corrupted index and the error is gone now.
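
For anyone hitting the same thing, here is a rough sketch of the full sequence described above, using the bucket paths from my question. <healthy_bucket> is a placeholder for whichever non-corrupted bucket of the same index you copy from, and note that the original events in the corrupted bucket are effectively discarded, so back everything up first:

# stopping Splunk before touching bucket files is the cautious approach
/opt/splunk/bin/splunk stop

# keep a backup of the corrupted bucket
cp -a /opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1 /tmp/db15023534821504459082_1.bak

# replace the corrupted bucket's contents with those of a healthy bucket
# from the same index (<healthy_bucket> is a placeholder, not a real name)
rm -rf /opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1/*
cp -a /opt/splunk/var/lib/splunk/indextest/db/<healthy_bucket>/* /opt/splunk/var/lib/splunk/indextest/db/db15023534821504459082_1/

# rebuild the bucket so its index files match the copied rawdata
/opt/splunk/bin/splunk fsck repair --one-bucket --index-name=indextest --bucket-name=db15023534821504459082_1 --try-warm-then-cold

/opt/splunk/bin/splunk start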


SplunkTrust

If your problem is resolved, please accept the answer to help future readers.

---
If this reply helps you, an upvote would be appreciated.


Explorer

Thanks so much for your time and attention!

I even tried rebuilding the bucket. It failed with failReason=No bloomfilter, which is the same thing that happened with the fsck repair command.
I have only one indexer server in the architecture. Please find below the files in the corrupted and non-corrupted buckets of my index (a sketch of the rebuild invocation follows the two listings).

Files in the corrupted bucket:
[splunk@hostname db150574903915057490290]$ ls
1505749039-1505749029-9561667152978923474.tsidx  bloomfilter2  bucket_info.csv  corrupt.all.marker  Hosts.data  rawdata  Sources.data  SourceTypes.data

Files in the non-corrupted bucket:
[splunk@hostname db150580482415058030181]$ ls
1505804824-1505803018-5429584547022512555.tsidx  bloomfilter  bucket_info.csv  Hosts.data  optimize.result  rawdata  Sources.data  SourceTypes.data
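
For reference, the rebuild attempt mentioned above was essentially the following (just a sketch, assuming the splunk rebuild syntax of bucket directory followed by index name, and the corrupted bucket path from the listing; stopping Splunk first is the cautious choice, depending on your version):

/opt/splunk/bin/splunk stop
# rebuild the index files for the corrupted bucket from its rawdata
/opt/splunk/bin/splunk rebuild /opt/splunk/var/lib/splunk/indextest/db/db150574903915057490290 indextest
/opt/splunk/bin/splunk start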

Can I get any info on how to fix the corrupted bucket by replacing its contents from a working one? Would deleting the corrupted buckets help? I have the same issue even with internal indexes like main and _audit.
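
In case it helps, the corrupted bucket above contains a corrupt.all.marker file, so a quick way to see every bucket flagged this way across all indexes (a sketch assuming the default /opt/splunk/var/lib/splunk datastore path) is:

# list every bucket directory that carries a corruption marker
find /opt/splunk/var/lib/splunk -maxdepth 4 -name 'corrupt*.marker' 2>/dev/null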
