The other answer misses the potential issue of a forwarder re-forwarding data that has already been seen.
In these sorts of setups you need to take a step back and consider the following:
1) If these logs are critical, consider how they are distributed between systems. A common setup is to use a load balancer to split syslog traffic between two servers; most load balancers can probe a TCP port to check that the service is available, so if one forwarder goes offline all data is sent to the secondary server instead.
2) Again, if they really are critical, you could always set up two forwarders sending to two different indexes on the same indexer. That way you maintain a primary and a secondary copy.
3) No service I have ever seen runs at 100%, or even has 100% as a reasonable SLA. Downtime should be expected at some point, and normal procedures for process monitoring and handling a forwarder going offline should already be in place. Some monitoring tools can also be configured to automatically restart a failed process. In this case I would simply accept that as a best endeavour, short of custom-scripting some hideous-to-maintain piece of sticky-plaster work 🙂
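For point 2, a related option is to have a single forwarder clone its output to two indexers via `outputs.conf`: listing multiple target groups in `defaultGroup` duplicates the data stream to each group. A sketch, with placeholder hostnames:

```ini
# outputs.conf on the forwarder (hostnames are examples only)
[tcpout]
defaultGroup = primary_indexers, secondary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997

[tcpout:secondary_indexers]
server = indexer2.example.com:9997
```

Note this doubles your indexed volume, which matters for licensing, so weigh that against how critical the logs really are.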
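On point 3, if you are on a systemd host, the "automatically restart a failed process" part needs no custom scripting at all. A sketch of a unit file, assuming the default forwarder install path:

```ini
# /etc/systemd/system/splunkforwarder.service (path is the typical default)
[Unit]
Description=Splunk Universal Forwarder
After=network.target

[Service]
ExecStart=/opt/splunkforwarder/bin/splunk start --nodaemon
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
```

`Restart=on-failure` with a modest `RestartSec` gets you automatic recovery from crashes without the sticky-plaster scripting mentioned above.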
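The port probe in point 1 is easy to reason about: the load balancer just attempts a TCP connection to the syslog listener and fails over when it can't connect. A minimal sketch of that check (host and port are placeholders, not anything from your environment):

```python
import socket

def port_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.
    This is effectively what a load balancer health check does
    against the forwarder's syslog listener."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Demo against a throwaway local listener (port 0 = any free port)
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    _, port = server.getsockname()
    print(port_alive("127.0.0.1", port))  # listener up
    server.close()
```

A real load balancer does this on a schedule (e.g. every few seconds) and only fails over after several consecutive failures, to avoid flapping.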
Bear in mind that the forwarder will just pick up where it left off once it's restarted.