Thanks Rich. We require the IFs because our on-prem logs forward to them, and in the event of a failure they cache the logs so we don't lose anything; we often can't cache at source due to CPU and disk overhead. Each IF has 1 TB of storage for this purpose. We have 6 IFs (heavy forwarders) currently but will need to scale up in the future.

We wanted to use AWS ELBs in front of them to spread the load without having to reconfigure the UFs, which live in some environments where we can't get access to the hosts to reconfigure them. We will also have hundreds of UFs forwarding to the 6 IFs. The IFs are currently deployed in 6 different regions for failover, and we want to configure each UF to talk to only the geographically closest 2 or 3 ELBs. The IFs get regularly rolled, with Terraform and Ansible redeploying them as code, which means the IPs rotate; and if we need to scale up, we can put multiple IFs behind the single DNS entry of an ELB.

Surely someone must have dealt with this before? We are coming from ELK, where we had multiple Logstash collectors behind single ELBs without issue and could scale the number of Logstash instances up and down on the fly (at peak we had about 55 Logstash instances running behind 6 ELBs). If there is a better way to handle this, please let me know. Really looking for guidance here as I'm new to Splunk, and the learning curve is steep! TIA!
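For context, here is a minimal sketch of what I had in mind on the UF side: point each UF's outputs at the 2-3 nearest ELB DNS names and let Splunk's built-in forwarder load balancing rotate across them. The hostnames and group name below are made up for illustration; 9997 is just the conventional splunktcp port.

```ini
# outputs.conf on a UF (sketch; hostnames are hypothetical examples)
[tcpout]
defaultGroup = regional_elbs

[tcpout:regional_elbs]
# Two or three geographically closest ELB DNS entries.
# Since the UF resolves these names, IF IPs can rotate behind the
# ELB (Terraform/Ansible redeploys) without touching the UF.
server = elb-region-a.example.com:9997, elb-region-b.example.com:9997
# Re-balance connections periodically so traffic spreads across targets.
autoLBFrequency = 30
# Indexer acknowledgement, so cached events aren't lost if a hop dies.
useACK = true
```

The idea is that scaling IFs up or down behind each ELB then needs no change on the hundreds of UFs.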