Splunk Enterprise

Indexes stuck in fixup

jariw
Path Finder

Hi,

We have two indexes which are stuck in the fixup task. Our environment consists of several indexing peers which are attached to SmartStore.

This morning there was a warning that the search factor (SF) and replication factor (RF) are not met. Two indexes are in this degraded state. Checking the bucket status, there are two buckets from two different indexes which don't get fixed. Those buckets are listed under the search factor fix, replication factor fix, and generation views. The last one shows the notice "No possible primaries".

Searching on the indexer that is mentioned in the bucket info, I see:

DatabaseDirectoryManager [838121 TcpChannelThread] - unable to check if cache_id="bid|aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319|" is stable with CacheManager as it is not present in CacheManager

and

ERROR ClusterSlaveBucketHandler [838121 TcpChannelThread] - Failed to trigger replication (err='Cannot replicate remote storage enabled warm bucket, bid=aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319 until it's uploaded'
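Since the error says the bucket can't be replicated until it's uploaded, I tried to verify whether the bucket actually exists in the remote store. As a rough sketch (run on the indexer; the exact `--starts-with` argument and volume name `remote_store` are assumptions, adjust to your own SmartStore volume):

```
# List objects the remote store holds for this index (sketch, volume name assumed)
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store/aaaaaa/db/
```

If nothing matching the bucket ID comes back, the warm bucket was apparently never uploaded, which would explain why replication keeps failing.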

What can be wrong, and what can I do about it?

 

Thanks in advance

Splunk Enterprise v9.0.5, on-premises SmartStore.


jariw
Path Finder

I also ran a dbinspect on this index and searched for the bucketId. It gives the output below:

bucketId aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319
endEpoch 1660559027
eventCount 0
guId B5D4AECD-273A-4CB5-88B4-F6C5C75C3564
hostCount 0
id 183
index aaaaaa
modTime 01/30/2024:15:24:57
path /opt/splunk/data/cold/aaaaaa/db_1660559027_1659954230_183_839799B0-6EAF-436C-B12A-2CDC010C1319
rawSize 0
sizeOnDiskMB 3.078125
sourceCount 0
sourceTypeCount 0
splunk_server server1.bez.nl
startEpoch 1659954230
state cold
tsidxState full
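For reference, the search I used was along these lines (a sketch; index and bucketId as above):

```
| dbinspect index=aaaaaa
| search bucketId="aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319"
```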

I don't understand why it says the bucket is in cold. This index (like all indexes on these servers) has been migrated to SmartStore, so this path is wrong. Am I missing something?

And why is eventCount 0 and rawSize 0, yet there are a startEpoch and an endEpoch without any events?
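Since the DatabaseDirectoryManager message said the cache_id is not present in the CacheManager, I also thought about cross-checking what the CacheManager knows about this bucket. A sketch of such a search (field names returned by the cacheman endpoint may vary by version):

```
| rest /services/admin/cacheman splunk_server=server1.bez.nl
| search title="bid|aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319|"
```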
