
Using REST API load balancing in a Search Head Cluster, why are session keys not replicated?

ashleyherbert
Communicator

Hello

We have implemented a search head cluster behind an F5 LTM. I've created an LTM VIP with cookie persistence for the REST API (hitting port 8089 obviously), but we're having an issue with our REST clients.

When the REST client authenticates, it gets a session key back. When it makes its next request passing that session key, if the request hits a different Splunk server it gets an HTTP 401 with the response 'call not properly authenticated'.
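For reference, this is roughly the flow the client follows (the VIP host name and credentials below are just placeholders):

# 1. Authenticate against the VIP and read the session key out of the XML response
curl -k https://splunk-vip:8089/services/auth/login -d username=admin -d password=changeme
# ...returns <sessionKey>abc123...</sessionKey>

# 2. Pass the session key in the Authorization header on the next request
curl -k -H "Authorization: Splunk abc123..." https://splunk-vip:8089/services/search/jobs -d search="search index=_internal | head 5"

It's that second call that returns the 401 whenever it lands on a different search head.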

I thought initially that the second request was hitting a different server before the session had been replicated around the cluster, but in my testing I've found that the session key never gets replicated at all, and I can only use it against the one Splunk server that issued it.

Is this expected behaviour? I would expect that it would function the same way as the Splunk Web component, where the session cookies are accepted by every server in the cluster.

Is there a way I can get these session keys to replicate around the cluster?

I can change the persistence to be source based, but even then there's no guarantee that the next requests will always hit the same server.

Thanks,
Ash

dstricharz
Engager

Having run into a similar issue: you need your LTM (or any other load balancer) to support persistence/stickiness on the virtual server. Cookie-based persistence only covers clients that actually handle cookies, so it won't work for every REST client. A more generic approach is source address stickiness (source IP affinity) with a persistence timeout long enough to cover the gap between the initial search call, the search processing, and the result retrieval, so that a subsequent request to fetch results is handled by the same server that received the initial search request.

If the same virtual server serves both REST calls (e.g. port 8089) and normal search traffic (e.g. port 8000), source IP persistence will then apply to your normal users as well, and you will get undesired load balancing effects. Either split the traffic into separate virtual server configs or, on the LTM, use an iRule to apply different persistence methods based on URL prefix or destination port.

Also take into consideration scenarios where the REST endpoint is reached through a proxy or behind source NAT: in that case you will only see one or a few client IPs and will lose at least some of the load balancing benefit if source IP persistence is your choice.

Either way, you need to configure your load balancer with persistence/stickiness enabled on the virtual server; a rough sketch of what that could look like in tmsh follows.
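Something along these lines on the LTM (the profile and virtual server names are placeholders, and the exact tmsh syntax may vary slightly between TMOS versions):

# source-address persistence profile with a one-hour idle timeout
tmsh create ltm persistence source-addr splunk_rest_src_persist { defaults-from source_addr timeout 3600 }

# apply it to the existing 8089 REST virtual server in place of cookie persistence
tmsh modify ltm virtual vs_splunk_rest_8089 persist replace-all-with { splunk_rest_src_persist }

# keep the 8000 web virtual server on cookie persistence for browser traffic
tmsh modify ltm virtual vs_splunk_web_8000 persist replace-all-with { cookie }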


jplumsdaine22
Influencer

Hi Ash,

Have you tried using cookie-based authorization on the login endpoint? E.g.:

curl -k -u admin:changeme  https://localhost:8089/services/auth/login -d username=admin -d password=changeme -d cookie=1

You'll then have to pass the cookie in the header of all your subsequent requests, but I assume it should be replicated the same way as the web UI cookies?

See http://docs.splunk.com/Documentation/Splunk/6.3.2/RESTREF/RESTaccess#auth.2Flogin
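Roughly like this with curl's cookie jar (the VIP host name and credentials are placeholders):

# log in with cookie=1 and store the splunkd session cookie in a cookie jar
curl -k -c cookies.txt https://splunk-vip:8089/services/auth/login -d username=admin -d password=changeme -d cookie=1

# send the stored cookie back on subsequent requests
curl -k -b cookies.txt https://splunk-vip:8089/services/search/jobs -d search="search index=_internal | head 5"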


banderson7
Communicator

I'm having a similar issue (I think) here. https://answers.splunk.com/answers/350444/why-are-we-seeing-slow-performance-of-the-search-a.html Do you get those error messages on the distributed management console?


banderson7
Communicator

Mine turned out to be a bug w/ the distributed management console that's scheduled to be fixed in the next implementation. Good luck!
