I currently have a search head cluster, multiple indexers, and a series of forwarders. I understand how the indexers and search heads provide high availability. I just don't understand how the forwarders maintain high availability.
Is there some way to set up a redundant place for your data, so that if one forwarder is down, another will pick up the data and move it into Splunk?
Before my time here we had something along those lines in an active/passive setup. It was actually two syslog servers. Both servers received the same data in the same folders/files, but only one had the forwarder running at any given time. The trick was to put the fishbucket on a shared mount point and symlink to it from the normal fishbucket location on both servers.

So the failover was still manual - meaning we had to start up Splunk on the backup server. But when it started, it was using the same fishbucket as the primary, so it knew where to resume reading files from.

I'm not sure how good a solution that was, but it could be an option for you. As long as both forwarders read from the same place and share a fishbucket, I'd guess it would work.
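To make the shared-fishbucket idea concrete, here's a minimal sketch of the symlink layout. The paths are stand-ins (temp directories simulate the shared mount and each server's local Splunk state directory; on a real forwarder the fishbucket lives under `$SPLUNK_DB/fishbucket`), and the "state" file is just a placeholder for the fishbucket's actual contents:

```shell
#!/bin/sh
# Stand-in paths: SHARED simulates the mount point visible to both
# syslog servers; LOCAL_A and LOCAL_B simulate each server's local
# Splunk state directory (normally $SPLUNK_DB on a real forwarder).
SHARED=$(mktemp -d)/fishbucket
LOCAL_A=$(mktemp -d)/splunk_db
LOCAL_B=$(mktemp -d)/splunk_db
mkdir -p "$SHARED" "$LOCAL_A" "$LOCAL_B"

# On each server, the normal fishbucket location becomes a symlink
# pointing at the shared mount.
ln -s "$SHARED" "$LOCAL_A/fishbucket"
ln -s "$SHARED" "$LOCAL_B/fishbucket"

# The active server (A) records its read position; the passive server (B)
# sees the same state through its symlink, so on failover it resumes
# instead of re-reading files from the beginning.
echo "offset=1042" > "$LOCAL_A/fishbucket/state"
cat "$LOCAL_B/fishbucket/state"
```

The key property is that both symlinks resolve to the same directory, so whichever server runs the forwarder picks up where the other left off.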
In general though, we don't worry much about HA for forwarders. We have monitoring in place to restart Splunk if it stops, and we get a daily report (from the Deployment Monitor app) of forwarders that haven't checked in to our deployment server. So we can typically address a stopped forwarder before its data rolls.
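The restart-if-stopped monitoring amounts to a simple watchdog. Here's a hedged sketch of the idea; `sleep 999` stands in for `splunkd` so the sketch is self-contained, and on a real host the check would target `splunkd` and the relaunch would be `$SPLUNK_HOME/bin/splunk start`:

```shell
#!/bin/sh
# Minimal watchdog sketch: if the watched command line isn't running,
# relaunch it. "sleep 999" is a stand-in process, not Splunk itself.
ensure_running() {
    # $1 is the command line to look for and, if absent, relaunch
    if pgrep -f "$1" > /dev/null; then
        echo "running"
    else
        echo "restarting"
        $1 &    # stand-in for: "$SPLUNK_HOME/bin/splunk" start
    fi
}

ensure_running "sleep 999"   # first call relaunches the stand-in
ensure_running "sleep 999"   # second call finds it running
pkill -f "sleep 999"         # clean up the stand-in process
```

On our hosts this runs from cron; anything that can periodically invoke a check like this (cron, a systemd timer, your existing monitoring agent) works the same way.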
Right, but if the OS presenting that data and running the forwarder goes down... then what? Any redundancy you have in place for that contingency will also apply to the forwarder. What scenarios are left that we're trying to cover?
Short answer is that there isn't a mechanism for multiple forwarder instances to watch the same data and share state on what has and hasn't been forwarded. But there are very few circumstances where that would be useful: generally, whatever takes out the forwarder takes out whatever is presenting the data as well.