Deployment Architecture

What is the behaviour of Splunk ingestion when the replication factor cannot be met?

benwheeler
New Member

We have noticed both failed and successful ingestion when the replication factor for an indexer cluster has not been met.

What is the specific behaviour of Splunk when the indexing replication factor has not been met? Will content that has already been ingested eventually be copied around to meet the replication factor? For instance, during a restart of an indexer, if having that indexer offline makes the replication factor impossible to meet, what does Splunk do?

We can't find any specific reference to this in the documentation, particularly around multisite indexer clusters. If the origin site ingests the data but the other site does not meet the replication requirements, will that content eventually be copied to the other site?

1 Solution

dxu_splunk
Splunk Employee

During a site failure, let's say of site2, indexing may pause for a bit (when useACK=true), since site2 will not be able to receive data (both data forwarded directly to site2 and replicated data coming over from site1).

After a short time (generally a replication/heartbeat timeout, or sooner), the replication from site1 will give up (the hot buckets will roll), new data coming into site1 will not be replicated to site2, and normal site1 ingestion will continue. Internally, we start making hot buckets without a replicated copy on site2, so useACK=true won't block us, since it is no longer trying to replicate to site2. When site2 comes back up, the cluster master will start replication jobs to ensure the site policies are met and to fill in the missing buckets on site2.

Be aware that on startup, the cluster will wait until the required number of indexers is up per site before it starts indexing. If you start up a multisite cluster but do not start up one of its sites, indexer clustering won't begin. See https://answers.splunk.com/answers/209141/why-am-i-getting-error-indexing-not-ready-fewer-th.html and http://docs.splunk.com/Documentation/Splunk/6.2.0/Indexer/Restartindexing
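
If you need indexing to begin even though a site is still down, the second link above covers overriding that startup wait from the master. A minimal sketch, assuming you run it on the cluster master with admin credentials (the password is a placeholder):

    # tell the master to begin indexing without waiting for the
    # missing site's peers to rejoin
    splunk set indexing-ready -auth admin:changeme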

benwheeler
New Member

@jchampagne, we currently have a two-node indexer cluster (replication factor 2), but want to move to a multisite cluster (probably with a site replication factor of origin:2, site1:2, site2:2, total:4).

Ideally we want to continue to ingest on a single site (for example site1) if the other site (site2) becomes unavailable. My reading of the documentation is that if the replication factor for site2 cannot be met, the whole multisite cluster will stop indexing. There does not seem to be a facility to continue ingesting on site1 and then replicate the new buckets over to site2 when it comes back online. Is this correct?
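
For reference, my understanding is that the policy would look something like this in server.conf on the cluster master (a sketch; everything beyond the replication factor above, including the search factor, is a placeholder):

    [general]
    site = site1

    [clustering]
    mode = master
    multisite = true
    available_sites = site1,site2
    site_replication_factor = origin:2,site1:2,site2:2,total:4
    # the search factor below is an assumed example, not part of our plan
    site_search_factor = origin:1,site1:1,site2:1,total:2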

Also, how is the status of either meeting or not meeting the replication factor communicated around the cluster? I'm interested in how this behaves so that we can plan for node failures. Presumably this depends on how the ingestion of data is being done (we have a mixture of forwarders configured with useACK=true, modular inputs, and STOMP).

jchampagne_splu
Splunk Employee

@benwheeler, can you point me to the docs page you're referring to that indicates that indexing stops if replication factor isn't met? Perhaps we need to do some clarification there.

Under normal circumstances, your Universal Forwarders will load balance between the indexers you specify in their outputs.conf file. The receiving indexer will write the data locally and then replicate it to a randomly chosen peer or set of peers (depending on the replication policy). The indexers receive a list of peers they can replicate with from the Cluster Master at regular intervals. If you're utilizing indexer acknowledgement, by setting useACK=true on the Universal Forwarders, the indexer will ensure all remote copies of the data have been written before acknowledging back to the Forwarder.

The situation you're running into is that your replication factor (RF) is equal to the number of indexers in your cluster. This means that if any indexer is down, Splunk can't meet the replication policy and therefore can't acknowledge back to the forwarder. In this scenario, the forwarder will continue to send data until the read or write timeout is reached (300 seconds by default). At that point, it will close the connection and try another indexer. Since none of your indexers will be able to send acknowledgements, the forwarder will eventually time out on all of them.

Here is a doc that explains how Indexer Acknowledgement works: http://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Protectagainstlossofin-flightdata
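
As a sketch of the forwarder side (the group name and hosts are placeholders), the relevant outputs.conf settings look like this:

    [tcpout]
    defaultGroup = my_indexers

    [tcpout:my_indexers]
    server = idx1.example.com:9997,idx2.example.com:9997
    # wait for the indexer to confirm the data was written before moving on
    useACK = true
    # read/write timeouts default to 300 seconds; after that the forwarder
    # closes the connection and tries another indexer in the list
    readTimeout = 300
    writeTimeout = 300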

So what's the solution? You've got two options:

  • Set your total RF to a number that is less than your total number of indexers.

This will allow the receiving indexer to select a different replication peer in order to meet the RF/SF and ultimately send an acknowledgement back to the forwarder.

  • Set useACK=false

This will disable indexer acknowledgement, which is not required for index clustering. During an indexer failure, the receiving indexer will continue to replicate data with existing peers. If the RF/SF are not met, the cluster master will realize this and attempt to fix-up (replicate) the necessary buckets as soon as enough peers are online to meet the RF/SF policy.
However, by disabling indexer acknowledgement, the forwarders will not wait for confirmation that their data was written to disk. You run the risk of losing data due to network problems in this scenario. Both options are sketched below.
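
A minimal sketch of both options (stanza and group names are placeholders):

    # Option 1 - server.conf on the cluster master: keep the replication
    # factor below the peer count (e.g. RF 2 with 3 peers), so a single
    # peer failure still leaves enough peers to meet the policy
    [clustering]
    mode = master
    replication_factor = 2

    # Option 2 - outputs.conf on each forwarder: disable acknowledgement
    [tcpout:my_indexers]
    useACK = false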

Regarding your other questions:

There does not seem to be a facility to continue ingesting on site1, then replicating the new buckets over to site2 when it comes back online.
The cluster master will automatically fix-up (replicate) all necessary buckets to meet your replication factor when peers come back online.

how is the status of either meeting or not meeting the replication factor communicated around the cluster?
The cluster master tracks the state of all buckets across the cluster. If it detects that the replication factor or search factor is not met for a bucket or set of buckets, it will begin a process called bucket fixing. Bucket fixing is basically replicating buckets to additional indexers to meet your replication factor.

http://docs.splunk.com/Documentation/Splunk/latest/Indexer/Whathappenswhenaslavenodegoesdown
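
If you want to watch the fix-up activity, one way (a sketch; the output format varies by version) is to ask the cluster master for its view of the peers and buckets:

    # run on the cluster master
    splunk show cluster-status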

benwheeler
New Member

Hi @jchampagne, thank you for detailing that. I've read through the docs again and you are right: there is no mention of indexing stopping due to not meeting the replication factor.

It does seem from your description, though, that if the replication factor is not met, a forwarder configured with useACK=true will stop receiving acknowledgements and then stop sending new data after 300 seconds.

This is also true of the STOMP plugin, as it does not acknowledge receipt of messages from the source unless the data is correctly replicated.

Will both of these inputs behave the same whether the indexer cluster is single-site or multisite? For example, if a site replication factor (origin:2, site1:2, site2:2, total:4) cannot be met due to a site2 network failure, does ingestion stop on site1? We want to avoid a prolonged pause in ingestion due to the failure of one site.

Possibly there is some other way to achieve this, but I haven't come across anything yet.

jchampagne_splu
Splunk Employee

@benwheeler, I can't speak specifically to the STOMP modular input, as it wasn't written by Splunk. However, if you're running the modular input on a Splunk forwarder and that forwarder has useACK=true, then the same limitations apply.

This would be true whether you're using single-site or multisite replication. The way to ensure your forwarders continue to send data, even if the replication factor is not being met, is to set useACK=false. If you set your total replication factor to a number less than the total number of indexers, you should rarely be in a situation where the cluster can't meet the replication factor.
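
As an illustration only (the sites and counts here are assumptions, not a recommendation), a multisite policy whose total stays below the peer count might look like this:

    # server.conf on the cluster master: 4 peers, 2 per site; one explicit
    # copy per site plus one floating copy (total:3) means losing a single
    # peer in either site still leaves enough peers to satisfy the policy
    [clustering]
    mode = master
    multisite = true
    available_sites = site1,site2
    site_replication_factor = origin:1,site1:1,site2:1,total:3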

jchampagne_splu
Splunk Employee

@benwheeler can you provide some detail on what your architecture looks like? How many indexers in how many sites? What is your RF and SF set to?
