Getting Data In

How to forward logs from a local data center to a Splunk Enterprise Indexer in AWS

devenjarvis
Path Finder

I have been at this for a couple of weeks now with no luck. We have a Splunk Enterprise deployment in AWS with a search head, 2 indexers, and an auto-scaled group of forwarders for the CloudWatch log data we are passing in, and it's working great right now. We would now like to use this existing setup to consume logs from servers that sit in our own data center (not AWS).

My thought was to simply add a Universal Forwarder on the server of choice, put an Elastic Load Balancer in front of one of the indexers in AWS (eventually we would like to send this data to both indexers if possible), and use Route53 in front of the load balancer to give it a domain for the UF to point at. The UF is set to forward to the Route53 domain on port 443, and the load balancer takes traffic on port 443 and passes it to the indexer on port 9995, which we have set up as a receiving port on the indexer.
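Roughly, the relevant pieces of config look like this (the domain below is a placeholder for our real Route53 name, and this assumes plain, unencrypted splunktcp):

outputs.conf on the Universal Forwarder:

    [tcpout]
    defaultGroup = aws_indexers

    [tcpout:aws_indexers]
    server = splunk-inputs.example.com:443

inputs.conf on the indexer (the same thing 'splunk enable listen 9995' sets up):

    [splunktcp://9995]
    disabled = 0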

Conceptually I think this should work, and I have verified the firewalls are open to allow this traffic, but 'splunk list forward-servers' on the UF shows the host as 'configured but inactive'. The splunkd.log file isn't especially helpful from what I can tell; the only error I see is about payload_size being too large, and while searching the Answers forums suggests that error can be caused by a number of different network issues, none of the solutions I found seemed relevant or worked. So my question is: how can I troubleshoot what in my networking is preventing logs from being forwarded to my indexer?
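In case it helps, here is roughly what I have been checking so far (paths assume a default Linux install of the Universal Forwarder, and the domain is a placeholder):

    # What the forwarder thinks about its output targets
    /opt/splunkforwarder/bin/splunk list forward-servers

    # Basic TCP reachability of the Route53/ELB endpoint from the UF host
    nc -vz splunk-inputs.example.com 443

    # Watch the forwarder's output processor while it retries
    tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log | grep -i TcpOutputProc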

All help or ideas are appreciated!

1 Solution

devenjarvis
Path Finder

Ironically, after working on this for weeks, I finally found the answer: the load balancer was set to listen for HTTP traffic, not TCP. Making that switch fixed it. I apologize for the unnecessary question.
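For anyone who lands here with the same symptom, the fix amounted to recreating the port 443 listener as TCP instead of HTTP. With the AWS CLI and a Classic ELB it looks roughly like this (the load balancer name is a placeholder):

    # Drop the HTTP listener on 443
    aws elb delete-load-balancer-listeners \
        --load-balancer-name splunk-indexer-elb \
        --load-balancer-ports 443

    # Recreate it as a plain TCP listener forwarding to the indexer's receiving port
    aws elb create-load-balancer-listeners \
        --load-balancer-name splunk-indexer-elb \
        --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=9995"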


ppablo
Retired

No apologies needed @devenjarvis 🙂 thanks for sharing your solution with the community to close your question out.
