Hi. We have an indexer cluster of 4 nodes with a little over 100 indexes. We recently took a look at the cluster manager fixup tasks and noticed a large number of them (around 24,000) that have been pending for over 100 days, concentrated on a select few of the indexes. The majority of these tasks give one of two reasons: "Received shutdown notification from peer" and "Cannot replicate as bucket hasn't rolled yet". For some reason these few indexes are quite low volume but have a large number of buckets.

Ideally I would like to clear these tasks. If we aren't precious about the data, would a suitable solution be to remove the indexes from the cluster configuration, manually delete the data folders for the indexes, and then re-enable the indexes? Or could we reduce the data size or the number of buckets on the index to clear out these tasks?

Here is an example of one of the index configurations:

# staging: 0.01 GB/day, 91 days hot, 304 days cold
[staging]
homePath = /splunkhot/staging/db
coldPath = /splunkcold/staging/colddb
thawedPath = /splunkcold/staging/thaweddb
maxDataSize = 200
frozenTimePeriodInSecs = 34128000
maxHotBuckets = 1
maxWarmDBCount = 300
homePath.maxDataSizeMB = 400
coldPath.maxDataSizeMB = 1000
maxTotalDataSizeMB = 1400

Thanks for any advice.
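To be concrete, this is roughly the sequence I had in mind for the first option. It is only a sketch: the index name and paths are taken from our config above, and I'm assuming that maintenance mode plus a cluster-bundle push is the right way to stage the config change, so please correct me if there is a better-supported procedure.

# 1. On the cluster manager: enable maintenance mode so the manager
#    suspends fixup scheduling while we work on the peers
splunk enable maintenance-mode

# 2. On the cluster manager: remove (or comment out) the [staging]
#    stanza in the _cluster app's indexes.conf (under master-apps or
#    manager-apps, depending on version) and push the bundle
splunk apply cluster-bundle

# 3. On each peer in turn: stop splunkd and delete that index's data
#    directories by hand (paths from our indexes.conf above)
splunk stop
rm -rf /splunkhot/staging/db /splunkcold/staging/colddb
splunk start

# 4. On the cluster manager: re-add the [staging] stanza, push the
#    bundle again, then leave maintenance mode
splunk apply cluster-bundle
splunk disable maintenance-mode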