Getting Data In

How does a forwarder protect against the loss of in-flight data?



To frame the question, here's an excerpt from the Splunk manual:

If all goes well, the indexer:

  1. Receives the block of data.

  2. Parses the data.

  3. Writes the data to the file system as events (raw data and index data).

  4. Sends an acknowledgment to the forwarder.

The acknowledgment tells the forwarder that the indexer received the data and successfully wrote it to the file system. Upon receiving the acknowledgment, the forwarder releases the block from memory.
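The wait-queue semantics described above can be sketched in a few lines. This is an illustrative model of the at-least-once delivery pattern, not Splunk's actual implementation; the class and method names (`Forwarder`, `Indexer`, `drop_acks`, etc.) are invented for the sketch:

```python
class Forwarder:
    def __init__(self):
        self.wait_queue = {}   # block_id -> data, held until acknowledged
        self.next_id = 0

    def send(self, data, indexer):
        """Send a block and keep a copy until the indexer acks it."""
        block_id = self.next_id
        self.next_id += 1
        self.wait_queue[block_id] = data
        indexer.receive(block_id, data, self)
        return block_id

    def on_ack(self, block_id):
        """The ack means the indexer wrote the block; release our copy."""
        self.wait_queue.pop(block_id, None)


class Indexer:
    def __init__(self, drop_acks=False):
        self.events = []           # stands in for parsing + writing events
        self.drop_acks = drop_acks # simulate the ack being lost in transit

    def receive(self, block_id, data, forwarder):
        self.events.append(data)   # data is indexed either way
        if not self.drop_acks:
            forwarder.on_ack(block_id)
```

Note that when the ack is dropped, the indexer has already written the data, but the forwarder still holds the block and will eventually resend it, which is exactly the duplicate scenario asked about below.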

My question is this: what happens if the acknowledgment is lost? Will the forwarder then send the same block to another indexer (assuming you have more than one indexer)? Does this mean you get duplicate data on your server?




Re: How does a forwarder protect against the loss of in-flight data?


Potentially. My understanding is that in the early days of data replication, new data was written to hot storage on the initial indexer but to cold storage on the indexer receiving the replicated copy. In many environments cold storage is slower, so the replication target was slower to write. The initial indexer waited until the data was replicated before sending the ack back to the forwarder, and if that process was too slow, the forwarder would resend the block of data to another indexer.

If you are seeing or suspect a lot of duplicate data and replication isn't in the equation, I'd suggest opening a support ticket.


Re: How does a forwarder protect against the loss of in-flight data?


(probably) from the same document:

After sending a data block, the forwarder maintains a copy of the data in its wait queue until it receives an acknowledgment. In the meantime, it continues to send additional blocks as usual. If the forwarder doesn't get acknowledgment for a block within 300 seconds (by default), it closes the connection. You can change the wait time by setting the readTimeout attribute in outputs.conf.

If the forwarder is set up for auto load balancing, it then opens a connection to the next indexer in the group (if one is available) and sends the data to it. If the forwarder is not set up for auto load balancing, it attempts to open a connection to the same indexer as before and resend the data.
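For reference, the settings mentioned above live in outputs.conf on the forwarder. The fragment below is a hypothetical example: the group name, server addresses, and values are illustrative, though `useACK` (which enables indexer acknowledgment) and `readTimeout` (the ack wait time quoted above) are the documented setting names:

```ini
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# Enable indexer acknowledgment so blocks are held until acked.
useACK = true
# Wait up to 600s (default 300s) for an ack before closing the connection.
readTimeout = 600
```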


The possibility of duplicates
It's possible for the indexer to index the same data block twice. This can happen if there's a network problem that prevents an acknowledgment from reaching the forwarder. For instance, assume the indexer receives a data block, parses it, and writes it to the file system. It then generates the acknowledgment. However, on the round-trip to the forwarder, the network goes down, so the forwarder never receives the acknowledgment. When the network comes back up, the forwarder then resends the data block, which the indexer will parse and write as if it were new data.

To deal with such a possibility, every time the forwarder resends a data block, it writes an event to its splunkd.log noting that it's a possible duplicate. The admin is responsible for using the log information to track down the duplicate data on the indexer.
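Since tracking down duplicates is left to the admin, one generic approach (not a Splunk utility; the function name and interface are invented here) is to pull the raw events from the suspect time window and report any that appear more than once:

```python
from collections import Counter

def find_duplicates(raw_events):
    """Return raw event strings that occur more than once, in first-seen order."""
    counts = Counter(raw_events)
    return [event for event, n in counts.items() if n > 1]
```

In practice you would feed this the `_raw` field of events exported from the window flagged in splunkd.log, then decide whether to delete the extra copies.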


So the short answer is: yes, you can end up with duplicates.
