We have two indexers in a cluster running Splunk 9.4.0.
I removed an index from the cluster by deleting its stanza from indexes.conf and pushing the bundle. After a rolling restart of the cluster the index is no longer visible via Settings --> Indexes on the indexers, but I can still see the deleted index when I run
| rest /servicesNS/-/-/data/indexes count=0
On investigation I found a few other removed indexes still showing as well. Is there any way of refreshing the data presented via the API so that it only shows the current indexes?
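In case it helps, this is a narrowed variant of that search I've been using to see which peer is still reporting the stale entries (the splunk_server filter and the table fields are just for readability, nothing special):

| rest /servicesNS/-/-/data/indexes count=0 splunk_server=*
| table splunk_server title currentDBSizeMB totalEventCount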
Cheers
Mike
Thanks, we will be restarting the indexers soon for patching; I will check again after that.
If this is a bug, it's not the first one with REST and indexes/volumes 😞
Anyhow, when you push a new bundle without those "deleted" indexes, remember that they are not removed from the peers' disks! They are only removed from indexes.conf.
If you really want to remove them and free that disk space, you must delete the index directories at the OS level after you have pushed the new indexes.conf, and then use the MC or CM to check that Splunk no longer sees them.
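Just as a sketch of what that OS-level cleanup usually looks like, assuming the default $SPLUNK_DB layout and that homePath/coldPath/thawedPath lived under a directory named after the index (check the paths your stanza actually used before deleting anything; "myoldindex" is only a placeholder), on each peer after the new bundle is applied:

# sanity check first: you should only see db, colddb, thaweddb for the removed index
ls -d $SPLUNK_DB/myoldindex/*
# then remove the whole index directory to free the space
rm -rf $SPLUNK_DB/myoldindex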
Ah, OK. I will remove the index files from the indexers and retry. The REST query may be checking the disk for index files, which would explain why I am still seeing some indexes that were removed quite some time ago.
It might be a memory cache; a rolling restart may not flush all in-memory caches. Can you try a reload on each member:
splunk reload index
If this also doesn't resolve the issue, try performing a full restart of the indexers.
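One more thing worth trying before a full restart, as a sketch assuming Splunk Web is enabled on its default port on each indexer: the debug/refresh page re-reads the configuration behind many REST endpoints and sometimes clears exactly this kind of stale listing:

https://<indexer>:8000/en-US/debug/refresh

or against the management port (assuming the data/indexes endpoint supports the usual _reload action on your version, which is worth a quick check):

curl -k -u admin https://<indexer>:8089/services/data/indexes/_reload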
Regards,
Prewin
If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!