Hello everyone,
We have a distributed deployment of Splunk Enterprise with 3 indexers.
Recently it has been raising "Detecting bucket ID conflicts" warnings.
So far I have tried:
https://splunk.my.site.com/customer/s/article/ERROR-Detecting-bucket-ID-conflicts
I have tried renaming the conflicting bucket and moving the DISABLED buckets out, both separately and in combination.
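A rough way to spot the duplicates on a peer is to look for bucket directories that share the same bucket ID, something like the sketch below (the path and index name are placeholders for a default install, and this is only an approximation of what the warning itself checks):

# List clustered bucket directories for one index on this peer and print any
# bucket-ID/origin-GUID pairs that occur more than once.
# Clustered directories are named db_<newest>_<oldest>_<bucketId>_<originGUID>,
# with an rb_ prefix for replicated copies.
IDX_HOME=/opt/splunk/var/lib/splunk/main     # placeholder index path
find "$IDX_HOME/db" "$IDX_HOME/colddb" -maxdepth 1 -type d \
     \( -name 'db_*' -o -name 'rb_*' \) -printf '%f\n' 2>/dev/null \
  | awk -F_ '{print $(NF-1) "_" $NF}' \
  | sort | uniq -d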
The warning is raised whenever a rolling restart is executed. Once it is resolved on one indexer, the next rolling restart raises it on another indexer, and so on in a circle.
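For reference, a rolling restart of the indexer cluster is normally triggered from the cluster manager (or the equivalent action in the UI):

# On the cluster manager
$SPLUNK_HOME/bin/splunk rolling-restart cluster-peers
# Afterwards, check peer and replication state
$SPLUNK_HOME/bin/splunk show cluster-status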
Please advise.
What did you or someone else do before this problem started? Or has there been some infrastructure-level issue?
Thanks for your reply @isoutamo
The only change I can think of is that we replaced RHEL8 with RHEL9 recently.
The nodes are in a scaling group and were replaced one by one; the same process went through without any issues in a different environment.
Hi @ArtieZ
Please can you confirm/check two things?
1) Is the GUID on each of your indexers unique? I assume you'd have bigger problems if they weren't, but it's worth checking. This can be found in $SPLUNK_HOME/etc/instance.cfg (see the sketch after point 2).
2) When you remediated by renaming the conflicting buckets, did you rename all replicas of those buckets on the other indexers too? If you only renamed on a single indexer, the cluster may well replicate the original conflicting bucket back again.
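Something like the sketch below covers both checks (hostname, credentials, index name, and bucket ID are placeholders):

# 1) On each indexer: confirm the per-instance GUID (they must all differ)
grep -A1 '^\[general\]' $SPLUNK_HOME/etc/instance.cfg

# 2) On the cluster manager: list where copies of the suspect bucket live.
#    Clustered bucket entries are named <index>~<bucketId>~<originGUID>;
#    "main" and "42" below are placeholders.
curl -sk -u admin:changeme \
     "https://cluster-manager:8089/services/cluster/master/buckets?count=0" \
  | grep '<title>main~42~'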
Thank you for your reply @livehybrid
1) Yes, they are unique
2) Yes, I thought about that, but could only find it on one indexer. I had not touched the indexers for 3-4 days, and today the conflicting bucket appeared on 2 indexers, so I renamed it on both. I'll check again tomorrow to see if that made any difference.
Unfortunately, the issue is back on 2 indexers.