Having multiple indexers will help with indexer availability, but it will not solve the networking problem. You can also install a Heavy Forwarder on the same node, so you will not have networking issues between the logging driver and the indexers anymore: the forwarder will buffer the data and send it to the indexers when they are available.
The hang you are experiencing is unexpected. My guess is that the Splunk Logging Driver does not set a timeout on its HTTP client, so when the connection is dropped on the other end without being closed on the driver's side, the driver waits indefinitely for a response. Looking at the source, the Splunk Logging Driver does not seem to set the Timeout field on its http.Client (https://github.com/moby/moby/blob/master/daemon/logger/splunk/splunk.go#L223), so you could send a PR to add a timeout (https://golang.org/pkg/net/http/#Client).
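For illustration, here is a minimal Go sketch (not the driver's actual code) of what setting Timeout on http.Client looks like; the 30-second value and the target URL are just examples:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Timeout covers the entire request: connecting, any redirects,
		// and reading the response body. Zero (the default) means no
		// timeout, which is what allows a request to hang forever.
		Timeout: 30 * time.Second, // example value
	}
	resp, err := client.Get("https://example.com")
	if err != nil {
		// A stalled or half-closed connection now surfaces as an error
		// here instead of blocking indefinitely.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```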
Adding a timeout should at least partially solve the problem.
But again, I would suggest you take a look at our solution, as our log forwarding does not depend on the Splunk logging driver: you write the logs in JSON, and our collector tails the JSON logs and forwards them to Splunk. We offer a free 30-day trial. Give it a try, or send us an email at sales@outcoldsolutions.com to learn more; we can schedule a call and discuss all the issues you are experiencing.