When using automatic load balancing of outputs from universal forwarders (UFs), the default number of seconds a forwarder sticks with a server before redirecting outputs to another server in the pool (autoLBFrequency) is 30 seconds. I am tempted to increase this interval to reduce the computational burden of session negotiation and the load placed on DNS servers resolving server names.
Does anyone have experience with, or thoughts on, negative effects that could occur from increasing the period of time between server changes?
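For reference, the setting lives in outputs.conf on the forwarder. A minimal sketch, assuming a hypothetical target group named `primary_indexers` and example indexer hostnames:

```ini
# outputs.conf on the universal forwarder
# (group name and hostnames below are placeholders)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# Default is 30 seconds; raising it makes the forwarder stick to
# one indexer longer before rotating to the next server in the pool.
autoLBFrequency = 120
```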
I have played with this setting while load balancing 12,000+ devices. The caveat is that I was doing it on a Windows Server 2012 machine. What eventually happened was that it took down my server because of too many open connections. Other than that, off the top of my head, the impact depends on your load and on whether you are set up in a clustered environment; your hardware will also play a role in balancing indexing versus searching.