First I kept seeing this banner all the time:
"received event for unconfigured/disabled index=XXXX"
for the indexes _internal and also _audit.
I found out that they were disabled (Manager > Indexes and $SPLUNK_HOME/etc/system/local/indexes.conf).
But when I enable them and restart, Splunk disables them again.
Of course, because _internal is disabled I had to look in the log files; here are the errors in splunkd.log:
09-20-2011 23:06:05.513 +0000 ERROR IndexProcessor - One or more indexes could not be initialized and were automatically disabled, please see splunkd.log for more details
09-20-2011 23:06:05.512 +0000 ERROR IndexProcessor - caught exception for index=_internal during initialization: 'Splunk has detected that a directory has been manually copied into its database, causing id conflicts [/opt/splunk/var/lib/splunk/internaldb/db/hot_v1_1, /opt/splunk/var/lib/splunk/internaldb/db/db_1307563383_1307490866_1].'. Disabling the index, please fix-up and run splunk enable index
I checked and there are several buckets with the same id.
What do I do now?
Rename the directory, replacing the number 0 at the end with a number that does not conflict with the ID of any existing db_* or hot_* folder. Do not modify any other part of the folder name.
The reason Splunk disabled the indexes is that it detected a bucket collision:
two or more bucket folders have the same unique ID.
(FYI, the bucket folder name formats are:
hot buckets: hot_v1_<id>
warm or cold buckets: db_<newest_event_epoch>_<oldest_event_epoch>_<id>)
To fix this, you can change the ID of some buckets to avoid the collision.
Look in the bucket folders, find the buckets that share the same ID, and change the ID of one of them (check which IDs are available first).
In this case, for bucket id=1, keep the hot bucket, and change the warm bucket's ID to the highest existing ID + 1.
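The scan described above can be sketched in Python. This is a minimal sketch, not a Splunk tool: the name patterns follow the bucket formats described in this thread, and the directory you point it at (e.g. the index's db directory) is yours to supply.

```python
import re
from collections import defaultdict
from pathlib import Path

# Bucket folder formats described above:
#   hot buckets:        hot_v1_<id>
#   warm/cold buckets:  db_<newest_epoch>_<oldest_epoch>_<id>
BUCKET_RE = re.compile(r"^(?:hot_v\d+|db_\d+_\d+)_(\d+)$")

def find_duplicate_ids(db_dir):
    """Group bucket folders by their trailing ID and return the collisions."""
    by_id = defaultdict(list)
    for entry in Path(db_dir).iterdir():
        m = BUCKET_RE.match(entry.name)
        if m:
            by_id[int(m.group(1))].append(entry.name)
    return {bid: sorted(names) for bid, names in by_id.items() if len(names) > 1}

def next_free_id(db_dir):
    """Highest existing bucket ID + 1, safe to use when renaming a colliding bucket."""
    ids = [int(m.group(1))
           for entry in Path(db_dir).iterdir()
           if (m := BUCKET_RE.match(entry.name))]
    return max(ids, default=0) + 1
```

With the colliding pair from the error above, you would keep the hot bucket and rename the warm one, changing only its trailing ID to the value `next_free_id` returns (with Splunk stopped).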
In the long run, you may want to figure out why you had buckets with the same ID. Verify that your file system is healthy and responds fast enough for Splunk, and make sure that old buckets weren't restored by a backup system or that a bad manipulation (like merging several index folders into one) wasn't performed.
This is a very common issue when you restore your buckets from a backup, because the backup puts back a hot bucket that has since rotated to warm or cold, causing a duplicate ID.
It is also frequent when multiple indexes or indexers were migrated or consolidated into an existing instance.
Since Splunk 5.* and index replication clustering, the bucket name format has changed:
the GUID of the origin indexer is appended at the end (to avoid duplicates across indexers).
Names now look like:
(db or rb)_(latest event epochtime)_(earliest event epochtime)_(bucket id)_(indexer GUID)
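As an illustration of the clustered name format, here is a small parse of such a name; the GUID in the example is invented, not from a real instance.

```python
import re

# Clustered bucket name: (db|rb)_<latest_epoch>_<earliest_epoch>_<id>_<origin_GUID>
CLUSTERED_RE = re.compile(
    r"^(?P<prefix>db|rb)_(?P<latest>\d+)_(?P<earliest>\d+)_(?P<id>\d+)"
    r"_(?P<guid>[0-9A-Fa-f-]+)$")

# Hypothetical example name; the GUID below is made up for illustration.
m = CLUSTERED_RE.match(
    "db_1307563383_1307490866_1_0E2C24F1-3E2D-4BAF-B04A-A24423BCF082")
```

So the bucket ID alone no longer has to be unique across indexers; the (id, GUID) pair is.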
Wow, here's a lesson learned for me during my hardware upgrade/migration process. (BTW, no one consciously duplicates bucket IDs, so telling us not to do it is moot -- telling us to look for them is another story...)
I shut down my Splunk process (moving hot to warm).
Did a giant RSYNC (SCP) to the new hardware.
For unrelated issues, I had to fall back to the "old" box.
Restarted Splunk on the old box, a couple days pass...
When I was ready to try again, I shut the process down,
did another RSYNC to the new hardware (hoping to save time in the copy process).
Bam! Duplicate bucket IDs on the new machine. And not just among the last few buckets, but at random locations: even though I was in the _700 range, I had duplicates around _320, _22, _699, etc. Go figure...
It didn't make much sense; I had about 20 duplicate bucket IDs in total. So, just to be safe, I blew it all away on the new machine and did a fresh SCP...
Lesson learned: don't use RSYNC -- just do one giant copy and be done.
The reason is likely that:
1 - on the first rsync, the hot bucket hot_v1_10 is copied while Splunk is still running
2 - the bucket rotates to warm and is renamed to db_something_something_10
3 - on the second rsync, the new warm bucket is copied
4 - now you have a duplicate for bucket id 10...
So the good approach is to exclude the hot buckets from the first rsync,
and sync everything in the final rsync, with Splunk stopped.
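The two-pass approach above can be sketched in Python with shutil (with rsync you would get the same effect with something like --exclude 'hot_v*' on the first pass); the paths are placeholders for your index directories.

```python
import shutil

def first_pass(src, dest):
    """First copy while Splunk is still running: skip hot buckets,
    since they may rotate to warm (and be renamed) before the final sync."""
    shutil.copytree(src, dest,
                    ignore=shutil.ignore_patterns("hot_v*"),
                    dirs_exist_ok=True)

def final_pass(src, dest):
    """Final copy with Splunk stopped: everything, including hot buckets."""
    shutil.copytree(src, dest, dirs_exist_ok=True)
```

This way the hot buckets only ever land on the new machine once, under their final warm names or as hot buckets frozen by the shutdown, so no ID can be copied twice under two different names.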