When it comes to looking for corrupt buckets, I wonder if those two commands (| dbinspect and splunk fsck) are morally the same, at least on some level. If they aren't the same, how do they differ? Anyone have thoughts or experience on this?
The chief advantage of a | dbinspect search is that you can run it while Splunk is running. If you have search head affinity turned on in a multi-site indexer cluster, then I'd imagine the results would be limited to the local site.
Corrupt buckets are probably more a symptom of an underlying issue, but their presence (rare as it might be) impacts searching. In theory, though, this is a way to proactively monitor for them without having to shut down your indexers.
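For that kind of proactive check, something like the following scheduled search would surface corruption that dbinspect can see, without downtime. This is a sketch: the corruptonly argument and the output fields shown (bucketId, state, path) are documented for dbinspect, but check the search reference for your Splunk version.

```
| dbinspect index=* corruptonly=true
| table index, bucketId, state, path
```

Any rows returned are buckets dbinspect considers corrupt at the metadata level; as discussed below, this does not cover every kind of corruption.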
No. We had several support tickets opened on this, where user searches would report a corrupted bucket but dbinspect would not identify it as corrupted.
dbinspect only looks at the metadata files, so the only corruption it will find is in the metadata. If the underlying journal.gz holding the actual events gets corrupted, that will not be detected.
Interesting! Appreciate the insight. I'm guessing then that the fsck scan is 'full featured' in that it checks for multiple aspects of corruption.
Unpacks and verifies the 'rawdata' component of one or more buckets. 'rawdata' is the record of truth from which Splunk software can rebuild the other components of a bucket. This tool can be useful if you are worried about, or believe there may be, data integrity problems in a set of buckets or an index. You can also use it to check journal integrity prior to issuing a rebuild, if you wish to know whether the rebuild can complete successfully before running it.
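For reference, the scan runs from the CLI on the indexer itself. A sketch of the invocation is below; the flag names match the fsck help output on recent versions, but run splunk fsck --help first, since the options have changed across releases:

```
# Read-only scan of every bucket in every index (no repair attempted):
splunk fsck scan --all-buckets-all-indexes

# Or limit the scan to a single index, e.g. main:
splunk fsck scan --all-buckets-one-index --index-name=main
```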
If the journal.gz gets corrupted, the bucket can't be recovered. It's my understanding that dbinspect does not look at or into the journal.gz.
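Since the journal is stored as gzip streams when journalCompression is left at its default of gzip in indexes.conf (it can also be lz4 or zstd, where this won't work), a crude first-pass probe is to see whether each journal even decompresses cleanly. This is only a sketch and not a substitute for splunk fsck, which also validates bucket structure; the path below is just the default main-index location, so adjust it for your deployment:

```shell
#!/bin/sh
# Probe each journal.gz under the index's db directory for gzip-level damage.
# gzip -t exits non-zero if the compressed stream cannot be read to the end.
for j in /opt/splunk/var/lib/splunk/defaultdb/db/db_*/rawdata/journal.gz; do
  [ -f "$j" ] || continue   # skip if the glob matched nothing
  gzip -t "$j" 2>/dev/null || echo "possible corruption: $j"
done
```

A clean decompress does not prove the bucket is healthy, but a failure here is a strong signal that fsck (and a rebuild) is in order for that bucket.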