I have two clustered indexers (v5.0.4) replicating buckets between them. I have been testing the failover mechanism to tick a box saying that the data is indeed searchable if an indexer fails. The testing methodology is as follows:
1. Run a search (e.g. index=main) for a window of time in the past.
2. Confirm the number of results returned from each splunk_server, and the total number of events returned.
3. Offline one of the indexers (IDX-A).
4. Re-run the search for the same period of time to confirm identical results are returned.
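For step 2, a search along these lines breaks the event count down per indexer, so you can compare before and after the failover (a sketch — the time window is just an example to match whatever past window you test with):

```
index=main earliest=-24h@h latest=-1h@h
| stats count by splunk_server
| addcoltotals labelfield=splunk_server label=TOTAL
```

The stats row per splunk_server tells you which peer served which share of the events, and the totals row is the figure that should stay constant after IDX-A goes offline.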
All pretty simple, right?
The problem I have is with step 4: I do not get the results that were previously returned by the now-offline indexer IDX-A. I have waited ~10 minutes with no joy. I have a replication factor and search factor of 2, and the cluster master reports that everything is hunky-dory. But as soon as I restart the splunkd process on IDX-A, I get the correct/expected number of results from IDX-B. So yes, replication works... but I'd have expected IDX-B to return the events previously held by IDX-A without the resumption of service on IDX-A being the trigger for it.
Is there a setting/timer I'm missing here? Happy to be pointed in the right direction!
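For reference, the clustering settings in play live in server.conf. A sketch of the relevant stanzas, with the factors of 2 mentioned above — hostnames and the key are placeholders:

```
# server.conf on the cluster master (sketch; values are placeholders)
[clustering]
mode = master
replication_factor = 2
search_factor = 2
pass4SymmKey = changeme

# server.conf on each peer (IDX-A, IDX-B):
# [clustering]
# mode = slave
# master_uri = https://cluster-master.example.com:8089
# pass4SymmKey = changeme
```

With search_factor = 2, both copies of each bucket should be searchable, which is why the behaviour described above is surprising.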
How much data did you index? (Replication happens not per event but per chunk of data, so I'm wondering whether the data was ever replicated before the shutdown.)
Also, are you sending data via forwarders, or indexing local files via monitor inputs? If forwarders: is each forwarder auto-load-balancing (autoLB) across all the peers? If a peer goes down before replicating the data, the forwarder should simply send that data to another peer.
That won't happen if you are indexing local files via monitor inputs on the indexers themselves. You should be using forwarders with indexer acknowledgment (useACK) enabled and set to auto-load-balance across all the peers.
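On the forwarders, that combination looks roughly like this in outputs.conf (a sketch — hostnames and ports are placeholders):

```
# outputs.conf on each forwarder (sketch; hostnames/ports are placeholders)
[tcpout]
defaultGroup = indexer_cluster

[tcpout:indexer_cluster]
server = idx-a.example.com:9997, idx-b.example.com:9997
autoLB = true
# Indexer acknowledgment: the forwarder holds data until a peer confirms
# it was written, and re-sends to another peer if no ack arrives.
useACK = true
```

With useACK on, data in flight to a peer that dies mid-stream is re-sent rather than lost; with autoLB on, new data simply flows to the surviving peers.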