Deployment Architecture

How to configure the load balancer between users and search heads

ssankeneni
Communicator

How can I configure the load balancer between the users and the search heads? I have a VIP address set up with a sticky (session persistence) protocol. Can anyone direct me on what I need to configure in Splunk to get it working? Which files do I need to configure to point Splunk at the VIP?

0 Karma
1 Solution

Takajian
Builder

I think the load balancer needs to persist each Splunk user's session. Splunk Web on the search head issues a cookie to the user, so the load balancer can persist the user by using that cookie as the sticky key. You will also need to set up search head pooling so configuration is shared across the search heads. Hope this helps.
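For example, if the load balancer happens to be HAProxy (the thread doesn't say which product is in use, so this is just one possible sketch, with placeholder IPs and server names, assuming Splunk Web runs plain HTTP on port 8000), cookie-based stickiness in front of two search heads might look roughly like this:

    defaults
        mode http
        timeout connect 5s
        timeout client  1m
        timeout server  1m

    frontend splunkweb_vip
        bind 10.0.0.100:8000                # the VIP users browse to
        default_backend splunk_search_heads

    backend splunk_search_heads
        balance roundrobin
        # insert an LB cookie so each browser keeps hitting the same search head
        cookie SRVID insert indirect nocache
        server sh1 10.0.1.11:8000 check cookie sh1
        server sh2 10.0.1.12:8000 check cookie sh2

If Splunk Web has SSL enabled, the load balancer would instead need to terminate or pass through HTTPS, and the stickiness mechanism may differ.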


0 Karma

campbellj1977
Explorer

That document is about search head pooling and does not really address load balancing Splunk Web.

troywollenslege
Path Finder

You can look at search head pooling.

http://docs.splunk.com/Documentation/Splunk/5.0/Deploy/Configuresearchheadpooling

This should give you a start.
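As a rough outline of what that page describes for Splunk 5.x (the shared-storage path below is a placeholder, so check the linked doc for the exact steps on your version), you stop Splunk, point pooling at shared storage, and restart:

    splunk stop
    splunk pooling enable /mnt/splunk-shp
    splunk start

which should leave a server.conf stanza along these lines on each search head:

    [pooling]
    state = enabled
    storage = /mnt/splunk-shp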

0 Karma


ssankeneni
Communicator

Thanks for the reply. I have the sticky protocol in place, but I'm not sure how to test it or what to configure in Splunk to make it work.
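One rough way to test stickiness from a client (the VIP hostname below is a placeholder, and X-Backend is only an example of a header a load balancer can be configured to add) is to save whatever cookies come back from the first request, replay them, and confirm every response comes from the same search head, either via such a header or via the load balancer's own stats/logs:

    curl -s -c cookies.txt -o /dev/null http://splunk-vip.example.com:8000/
    curl -s -b cookies.txt -o /dev/null -D - http://splunk-vip.example.com:8000/ | grep -iE 'set-cookie|x-backend'
    curl -s -b cookies.txt -o /dev/null -D - http://splunk-vip.example.com:8000/ | grep -iE 'set-cookie|x-backend'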

0 Karma