Getting Data In

How to forward logs from a local data center to a Splunk Enterprise Indexer in AWS

devenjarvis
Path Finder

I have been at this for a couple of weeks now with no luck. We have a Splunk Enterprise deployment in AWS with a search head, two indexers, and an auto-scaled group of forwarders for the CloudWatch log data we are passing in. It's working great right now. We would now like to use this existing setup to consume logs from servers that sit in our own data center (not AWS).

My thought was to simply add a Universal Forwarder on the server of choice, put an Elastic Load Balancer in front of one of the indexers in AWS (eventually we would like to send this data to both indexers if possible), and use Route53 in front of the load balancer to give it a domain for the UF to point at. The UF is set to forward to the Route53 domain on port 443. The load balancer takes traffic on port 443 and passes it to the indexer on port 9995, which we have set up as a receiving port on the indexer.
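For reference, the relevant configuration is roughly the following (the hostname is a placeholder for our Route53 record).

On the UF, outputs.conf points at the Route53 name:

    [tcpout]
    defaultGroup = aws_indexers

    [tcpout:aws_indexers]
    # Route53 record in front of the ELB; the ELB forwards 443 -> 9995
    server = splunk-ingest.example.com:443

On the indexer, inputs.conf opens the receiving (splunktcp) port:

    [splunktcp://9995]
    disabled = 0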

Conceptually I think this should work, and I have verified that firewalls are open to allow this traffic, but 'splunk list forward-servers' on the UF reveals that the host is 'configured but inactive'. The splunkd.log file isn't especially helpful from what I can tell; the only error I see is about payload_size being too large, and searching the Answers forums suggests this can stem from any number of network issues. None of the solutions I found seemed to be relevant or to work. So, my question is: how can I troubleshoot what is wrong with my networking that is preventing logs from being forwarded to my indexer?
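For what it's worth, here is how I have been checking connectivity at each hop (the hostname and IP are placeholders):

    # From the UF host: can we reach the load balancer through the Route53 name on 443?
    nc -vz splunk-ingest.example.com 443

    # From a host inside the VPC: is the indexer actually listening on 9995?
    nc -vz 10.0.0.x 9995

    # On the UF, re-check forwarding status after any change:
    $SPLUNK_HOME/bin/splunk list forward-servers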

Any help or ideas are appreciated!

1 Solution

devenjarvis
Path Finder

Ironically, after working on this for weeks, I finally found the answer: the load balancer was set to listen for HTTP traffic, not TCP. Making that switch fixed it. In hindsight this makes sense, since Splunk forwarder-to-indexer traffic is a raw TCP stream rather than HTTP, so an HTTP listener would mangle the payloads. I apologize for the unnecessary question.
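For anyone who hits the same thing on a Classic ELB, the fix amounts to swapping the 443 listener from HTTP to TCP, for example with the AWS CLI (the load balancer name is a placeholder):

    # Drop the HTTP listener on 443...
    aws elb delete-load-balancer-listeners \
        --load-balancer-name my-splunk-elb \
        --load-balancer-ports 443

    # ...and recreate it as plain TCP, passing through to the indexer's splunktcp port
    aws elb create-load-balancer-listeners \
        --load-balancer-name my-splunk-elb \
        --listeners Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=9995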

ppablo
Retired

No apologies needed, @devenjarvis 🙂 Thanks for sharing your solution with the community to close out your question.
