Deployment Architecture

Indexer Replication Failover Testing

rturk
Builder

Hello Splunkers!

I have two clustered indexers (v5.0.4) replicating buckets between them. I have been testing the failover mechanism to tick a box saying that the data is indeed searchable if an indexer fails. The testing methodology is as follows:

  1. Run a search (e.g. index=main) over a window of time in the past.
  2. Confirm the number of results returned from each splunk_server, and the total number of events returned (see the sketch after this list).
  3. Offline one of the indexers (IDX-A).
  4. Re-run the search over the same time period to confirm identical results are returned.
  5. ???
  6. PROFIT!!
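
For reference, a rough sketch of the kind of search I mean in steps 1 & 2 (the exact time window is arbitrary):

  index=main earliest=-24h@h latest=-1h@h | stats count by splunk_server

Summing the per-server counts gives the total event count for the window.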

All pretty simple, right? (Apart from steps 5 & 6, anyway.)

The problem I have is with step 4 - I do not get the results that were previously returned from the now-offline indexer IDX-A. I have waited ~10 minutes with no joy. I have a replication factor & search factor of 2, and the cluster master reports that everything is hunky-dory. But as soon as I restart the splunkd process on IDX-A, I get the correct/expected number of results from IDX-B. So yes, replication works... but I wouldn't have expected the resumption of service on IDX-A to be the trigger for IDX-B actually returning the events previously held by IDX-A.
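
For what it's worth, here's a rough sketch of a check that should show which searchable bucket copies each peer is exposing, before and after taking IDX-A offline:

  | dbinspect index=main | stats count by splunk_server, state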

Is there a setting/timer I'm missing here? Happy to be pointed in the right direction!

Thanks in advance 🙂

RT


svasan_splunk
Splunk Employee

R.Turk,

How much data did you index? (Replication is not per-event but happens per some amount of data, so I'm wondering if replication simply never happened before the shutdown.)

Also, are you sending data via forwarders, or indexing local files via monitor inputs? If forwarders, is your forwarder auto-load-balancing across all the peers? If a peer goes down without having replicated the data, the forwarder should simply send the data to some other peer.

But if you are indexing local files via monitors, that won't happen. You should be using forwarders with indexer acknowledgment (acks) turned on and set to auto-load-balance across all the peers.
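
Something along these lines in outputs.conf on the forwarder - the hostnames, group name, and port below are just placeholders, point them at your actual cluster peers:

  [tcpout]
  defaultGroup = cluster_peers

  [tcpout:cluster_peers]
  # placeholder hostnames/port for the two cluster peers
  server = idx-a.example.com:9997, idx-b.example.com:9997
  # indexer acknowledgment: the forwarder re-sends anything a failed peer never confirmed
  useACK = true
  # distribute events across all listed peers
  autoLB = true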
