Deployment Architecture

How do I remove a corrupted bucket in an Indexer Cluster environment?

daniel_splunk
Splunk Employee

One of my indexers in a cluster environment crashed and left a corrupted bucket. Searches that hit that bucket return an error like this:

[indexer1] idx=os Could not read event: cd=21145:261500. Results may be incomplete ! (logging only the first such error; enable DEBUG to see the rest)

Is there a command to remove or fix the corrupted bucket? I can't shut down the indexer to run fsck right now.


daniel_splunk
Splunk Employee

From the error message, the bucket number is 21145 (cd=21145:261500). You can run the search below to locate the actual bucket.

| dbinspect index=os
| search bucketId = *21145*
| table bucketId, guId, splunk_server, index, state
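
If your Splunk version supports it, dbinspect's corruptonly option can narrow the results straight to damaged buckets (a hedged variant; check the dbinspect documentation for availability on your release):

| dbinspect index=os corruptonly=true
| table bucketId, guId, splunk_server, index, state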

Once you have the bucketId, run the REST call below on the cluster master to remove the bucket from the affected peer.

splunk _internal call /services/cluster/master/buckets/<bucketId>/remove_from_peer -method POST -post:peer <guId>
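
As a filled-in sketch: if the dbinspect output above reported, say, bucketId os~21145~<guId> (clustered bucket IDs generally take the form <index>~<localId>~<originating-peer-GUID>, but use the value exactly as dbinspect reports it), the call run on the cluster master would look like:

splunk _internal call /services/cluster/master/buckets/os~21145~<guId>/remove_from_peer -method POST -post:peer <guId>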



gregbo
Communicator

How about a non-clustered bucket? Can I just delete it from the OS?
