Hello, we have a cluster of 6 indexers distributed across 2 sites. After a patching activity on these servers that required restarting the indexers one by one (before the restarts we put the cluster in maintenance mode, and we disabled it afterwards), we started to face an issue with some buckets. This is causing a failure to meet both the Search Factor (SF=2) and the Replication Factor (RF=2).

From the Bucket Status dashboard of the Monitoring Console we see that only one specific peer has these buckets with:

Status: Complete / Search State: Searchable

while on the other peers they show:

Status: NonStreamingTarget / Search State: Unsearchable

We checked that the configuration for these indexes is the same on all the indexers, in order to rule out problems due to misconfiguration.

Looking in the splunkd.log of the affected indexer we found the following error for many buckets:

"Corrupt bucket report: bid=XXX error='Error while trying to search bucket=XXX (error='Failed to read compression bits from bucket=XXX - exception thrown: JournalSliceDirectory: Cannot seek to rawdata offset 0, path="XXX/rawdata" Please check/repair bucket path='XXX' with 'fsck' as it could be corrupted.') Results may be incomplete!"

From the Bucket Status dashboard, selecting "roll" and "resync" did nothing. We did not try "Delete Copy" because we were not sure whether it would delete the bucket only on the problematic indexer or on all indexers. Could you please confirm exactly what "Delete Copy" deletes?

Otherwise, to fix this issue, we could (rough CLI sketch after this list):

1. Put the cluster in maintenance mode
2. Stop Splunk on the affected peer
3. Remove the corrupted bucket
4. Start Splunk again on that indexer
5. Force the resync from the Cluster Manager

Is this the correct approach to fix the problem? We would like to be sure before proceeding with deleting the buckets, because we do not want to lose any data. Thanks!
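For reference, this is roughly the sequence we have in mind, written as shell commands. The paths, index name, and bucket directory are placeholders, and the fsck/maintenance-mode invocations are our reading of the standard Splunk CLI, not something we have run yet on this cluster:

    # On the Cluster Manager: enable maintenance mode so the CM does not start bucket fix-up activity
    splunk enable maintenance-mode

    # On the affected peer: stop Splunk, then try an offline repair of one corrupted bucket first
    splunk stop
    splunk fsck repair --one-bucket --bucket-path=/opt/splunk/var/lib/splunk/<index>/db/<bucket_dir>

    # If the repair fails, move the corrupted bucket copy aside instead of deleting it outright,
    # so we still have it if something goes wrong with the resync
    mv /opt/splunk/var/lib/splunk/<index>/db/<bucket_dir> /some/backup/location/

    # Restart the peer, then disable maintenance mode on the Cluster Manager
    splunk start
    splunk disable maintenance-mode

    # Finally, trigger the resync of the affected buckets from the CM
    # (Bucket Status dashboard -> resync), so the cluster recreates the missing copy

The idea behind moving the bucket to a backup location rather than deleting it is that the data is only removed for good once the cluster has successfully replicated a healthy copy back to the peer.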