I have been at this for a couple of weeks now with no luck. We have a Splunk Enterprise setup in AWS with a search head, 2 indexers, and an auto-scaled group of forwarders for the CloudWatch log data we are passing in. It's working great right now. We would now like to use this existing setup to consume logs from servers that sit in our own data center (not AWS).
My thought was to simply add a Universal Forwarder on the server of choice, put an Elastic Load Balancer in front of one of the indexers in AWS (eventually we would like to send this data to both indexers if possible), and use Route 53 in front of the load balancer to give it a domain for the UF to point at. The UF is set to forward to the Route 53 domain on port 443. The load balancer takes traffic on port 443 and passes it to the indexer on port 9995, which we have set up as a receiving port on the indexer.
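For reference, here is a minimal sketch of the forwarder and receiver configuration this implies — `splunk.example.com` is a placeholder for the Route 53 name, and the group name is arbitrary:

```ini
# outputs.conf on the Universal Forwarder
# (splunk.example.com stands in for the Route 53 record)
[tcpout]
defaultGroup = aws_indexers

[tcpout:aws_indexers]
server = splunk.example.com:443
```

```ini
# inputs.conf on the indexer: accept forwarder (splunktcp) traffic on 9995
[splunktcp://9995]
disabled = 0
```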
Conceptually I think this should work, and I have verified that firewalls are open to allow this traffic, but 'splunk list forward-servers' on the UF reveals that the host is 'configured but inactive'. The splunkd.log file isn't especially helpful from what I can tell. The only error I see is about payload_size being too large, but searching the Answers forums suggests this can stem from any number of network issues, and none of the solutions I found seemed relevant or worked. So, my question is: how can I troubleshoot what is wrong with my networking that is preventing logs from being forwarded to my indexer?
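One thing that helped me narrow it down was checking raw TCP reachability separately from Splunk itself, to rule out DNS, firewall, and listener problems. A small sketch of that check (the hostname below is a placeholder for the Route 53 record):

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # DNS failure, timeout, or connection refused all land here
        return False

# Example: can the UF host even reach the load balancer on 443?
# check_tcp("splunk.example.com", 443)
```

If this returns True but the forwarder still shows 'configured but inactive', the problem is likely above the TCP layer (e.g. the load balancer mangling the Splunk-to-Splunk protocol) rather than a firewall.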
Funnily enough, after working on this for weeks I finally found the answer. The load balancer was set to listen for HTTP traffic, not TCP. Switching the listener to TCP fixed it. I apologize for the unnecessary question.
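For anyone hitting the same symptom: the Splunk-to-Splunk forwarding protocol is not HTTP, so the listener has to be plain TCP end to end. With a Classic ELB, the fix looks roughly like this (the load balancer name is a placeholder):

```shell
# Replace the HTTP listener with a TCP passthrough:
# TCP 443 on the ELB straight to the indexer's splunktcp port 9995
aws elb create-load-balancer-listeners \
  --load-balancer-name splunk-elb \
  --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=9995"
```

An HTTP listener tries to parse the forwarder's traffic as HTTP requests, which is also a plausible source of the payload_size error I was seeing in splunkd.log.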