Getting Data In

How does a forwarder protect against the loss of in-flight data?

NickCorbettAt
Explorer

Hi

To frame the question, here's a cut and paste from the Splunk manual:

If all goes well, the indexer:

  1. Receives the block of data.

  2. Parses the data.

  3. Writes the data to the file system as events (raw data and index data).

  4. Sends an acknowledgment to the forwarder.

The acknowledgment tells the forwarder
that the indexer received the data and
successfully wrote it to the file
system. Upon receiving the
acknowledgment, the forwarder releases
the block from memory.
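
For context, the acknowledgment behavior described above is switched on from the forwarder side with the useACK setting in outputs.conf. A minimal sketch, assuming a target group called primary_indexers and a placeholder indexer address:

  # outputs.conf on the forwarder
  [tcpout]
  defaultGroup = primary_indexers

  [tcpout:primary_indexers]
  server = indexer1.example.com:9997
  # hold each data block in the wait queue until the indexer acknowledges it
  useACK = true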

My question is this: what happens if the acknowledgement is lost? Will the forwarder then send the same block to another indexer (assuming you have more than one indexer)? Does this mean you end up with duplicate data on your indexers?

Thanks

Nick

1 Solution

aholzel
Communicator

(probably) from the same document:

After sending a data block, the forwarder maintains a copy of the data in its wait queue until it receives an acknowledgment. In the meantime, it continues to send additional blocks as usual. If the forwarder doesn't get acknowledgment for a block within 300 seconds (by default), it closes the connection. You can change the wait time by setting the readTimeout attribute in outputs.conf.

If the forwarder is set up for auto load balancing, it then opens a connection to the next indexer in the group (if one is available) and sends the data to it. If the forwarder is not set up for auto load balancing, it attempts to open a connection to the same indexer as before and resend the data.
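
To make that concrete, here is a sketch of what those settings might look like in outputs.conf on the forwarder (the group name and indexer addresses are placeholders, and readTimeout is shown at its default of 300 seconds):

  [tcpout:primary_indexers]
  # listing more than one receiver enables auto load balancing across them
  server = indexer1.example.com:9997, indexer2.example.com:9997
  # keep blocks in the wait queue until the indexer acknowledges them
  useACK = true
  # seconds to wait for an acknowledgment before closing the connection
  readTimeout = 300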

and:

The possibility of duplicates
It's possible for the indexer to index the same data block twice. This can happen if there's a network problem that prevents an acknowledgment from reaching the forwarder. For instance, assume the indexer receives a data block, parses it, and writes it to the file system. It then generates the acknowledgment. However, on the round-trip to the forwarder, the network goes down, so the forwarder never receives the acknowledgment. When the network comes back up, the forwarder then resends the data block, which the indexer will parse and write as if it were new data.

To deal with such a possibility, every time the forwarder resends a data block, it writes an event to its splunkd.log noting that it's a possible duplicate. The admin is responsible for using the log information to track down the duplicate data on the indexer.
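
As a rough example of how you might track that down: the forwarder's splunkd.log is normally forwarded to the _internal index, so a search along these lines should surface the resend warnings (the exact message text is an assumption, so adjust it to what you see in your own logs), and the second search is one way to look for events that were indexed more than once:

  index=_internal sourcetype=splunkd source=*splunkd.log* *duplicat*

  index=<your_index> | stats count by _time, host, source, _raw | where count > 1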

source: http://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Protectagainstlossofin-flightdata

So the short answer is: yes, you can end up with duplicate data.

Runals
Motivator

Potentially. My understanding is that in the early days of data replication, new data was written to hot storage on the initial indexer but to cold storage on the indexer it was being replicated to. In many environments cold storage is slower, which meant slower writes. The initial indexer waited until the data was replicated before sending the acknowledgment back to the forwarder, so if that process was too slow, the forwarder would send the block of data to another indexer.

If you are seeing, or suspect, a lot of duplicate data due to this and replication isn't in the equation, I'd suggest opening a support ticket.
