Splunk Enterprise

Too many UFs and too many ports for too many indexers as we scale - are there any alternatives for load balancing across them?

justinhaynes
Loves-to-Learn

Though not an emergency yet, I am hoping to make a decision on one of the two following options soon:

1. Double down on the current strategy of adding indexers in a cluster behind a load balancer and assigning an external port for each additional indexer

2. Define and pursue an alternative which would allow us to add indexers to our indexer cluster (and UFs) without having to resolve the connectivity challenges that come from not having all future ports allowed across the entire enterprise.

First I'll describe our situation for context and then I'll ask the question:

The situation at our large client is that there are tens of thousands of Universal Forwarders across the enterprise, and not all parts of the network(s) allow connectivity to our port range. For the sake of conversation, let's say the port range is 10000-10019 on 2 IP addresses: 1 for a test environment and 1 for a prod environment. Prod is the main concern here, as we will not be adding indexers to the test environment.

Though we don't have 20 indexers yet, that would be a reasonable upper limit for the currently anticipated scope. For the sake of the question, let's say we have 8 indexers.

Each port externally maps like this:

prod.address:10000 -> idx01:9996

prod.address:10001 -> idx02:9996

etc., for a total of 8 in production.
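For illustration, a UF's outputs.conf under this scheme would look roughly like the sketch below. The group name, addresses, and ports are placeholders based on the example above, not our real values:

[tcpout]
defaultGroup = prod_indexers

[tcpout:prod_indexers]
# one external load-balancer port per indexer, each mapping to that indexer's 9996
server = prod.address:10000, prod.address:10001, prod.address:10002, prod.address:10003
# the UF's built-in auto load balancing rotates across this list
autoLBFrequency = 30

Every additional indexer means another entry in that server list on tens of thousands of UFs, plus the corresponding firewall openings.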

 

However, earlier in the deployment there were only 4 indexers. Firewall requests were perhaps not always put in to consistently open all 20 ports instead of only the 4 which were online at the time. Firewall teams like to be able to test and verify connectivity rather than allow connectivity that will only be needed in the future, even though the latter would save them the trouble of the imminent 70,000+ firewall requests which will be needed to open up 20 ports across as many hosts... (And perhaps it would be better to simply have this allowed across the enterprise as part of my Option #1 above.)

My understanding is that Option 2 is not an option, because any strategy that presents only a single target like the one below would prevent the UFs from having their direct conversation with each indexer, which is part of Splunk's own way of balancing load. e.g.:

prod.ip:9996 : round-robin TCP (or whatever makes sense) to idx1:9996, idx2:9996, idx3:9996...
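In other words, the configuration I would like to be able to hand to the UFs (but, as I understand it, cannot) would be a single entry, something along these lines; again, the name and port are only illustrative:

[tcpout:prod_indexers]
# single external port; the load balancer would round-robin TCP sessions to idx01..idx08 on 9996
server = prod.address:9996

With only one target in the list, the UF's own auto load balancing has nothing to rotate across, and each forwarder would simply ride whatever long-lived TCP session the external load balancer pins it to.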

Short of redoing absolutely everything and moving to Heavy Forwarders behind a load balancer, I don't believe there is another way of doing this.

The biggest impact of moving to Heavy Forwarders would be having to re-onboard thousands of custom applications, in addition to planning a cutover from one method of collecting logs to the other in waves of applications.

So my question is, are there any alternatives for load balancing across multiple indexers which would allow us to use only one of our existing ports?

 

Thanks!

 
