We have implemented a search head cluster behind an F5 LTM. I've created an LTM VIP with cookie persistence for the REST API (hitting port 8089 obviously), but we're having an issue with our REST clients.
When the REST client authenticates, it gets a session key back. When it then makes the next request passing that session key, if the request hits a different Splunk server, it gets an HTTP 401 with the response 'call not properly authenticated'.
I initially thought the second request was hitting a different server before the session had been replicated around the cluster, but in my testing I've found that the session key never gets replicated at all, and I can only use that session key against the one Splunk server that issued it.
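For reference, this is roughly what I'm seeing (hostnames and the session key are placeholders for two cluster members reached directly, bypassing the VIP):

```shell
# Log in against member 1 and note the session key in the response
curl -k https://sh1.example.com:8089/services/auth/login \
     -d username=admin -d password=changeme

# Works against the member that issued the key
curl -k -H "Authorization: Splunk <sessionkey>" \
     https://sh1.example.com:8089/services/search/jobs

# Fails with 401 "call not properly authenticated" against any other member
curl -k -H "Authorization: Splunk <sessionkey>" \
     https://sh2.example.com:8089/services/search/jobs
```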
Is this expected behaviour? I would expect that it would function the same way as the Splunk Web component, where the session cookies are accepted by every server in the cluster.
Is there a way I can get these session keys to replicate around the cluster?
I can change the persistence to be source based, but even then there's no guarantee that subsequent requests will always hit the same server.
I'm having a similar issue (I think) here. https://answers.splunk.com/answers/350444/why-are-we-seeing-slow-performance-of-the-search-a.html Do you get those error messages on the distributed management console?
Mine turned out to be a bug with the distributed management console that's scheduled to be fixed in an upcoming release. Good luck!
Have you tried using cookie-based authentication on the login endpoint? E.g.:
curl -k https://localhost:8089/services/auth/login -d username=admin -d password=changeme -d cookie=1
You'll then have to pass the cookie in the headers of all your subsequent requests, but I assume it should be replicated the same way as the web UI cookies?
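Roughly like this, using curl's cookie jar (the VIP hostname and credentials are placeholders):

```shell
# Log in with cookie=1 so splunkd returns the session in a Set-Cookie
# header, and store it in a cookie jar
curl -k -c cookies.txt https://vip.example.com:8089/services/auth/login \
     -d username=admin -d password=changeme -d cookie=1

# Replay the cookie on subsequent requests; both splunkd and an LTM
# cookie-persistence profile can then read it from the Cookie header
curl -k -b cookies.txt https://vip.example.com:8089/services/search/jobs
```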
Having run into a similar issue: you need your LTM (or any other load balancer) to support persistence/stickiness on the virtual server. Since the client implementation might not be able to handle cookies, a cookie-based persistence configuration will cover some scenarios but not all, especially where the requesting client does not support cookie handling. A more generic approach is to configure source address stickiness (source IP affinity) along with a timeout for how long the load balancer keeps the stickiness in memory. The timeout should at least cover the time between the initial search call, search processing, and result retrieval, so that a subsequent request to fetch search results is handled by the same server that received the initial search request.
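On an LTM that looks something like this (tmsh; the profile and virtual server names are made up, and the one-hour timeout is an example value you would size to your longest-running search):

```
# Create a source-address persistence profile with a one-hour timeout
create ltm persistence source-addr splunk_rest_persist \
    defaults-from source_addr timeout 3600

# Attach it to the virtual server handling port 8089
modify ltm virtual vs_splunk_rest_8089 \
    persist replace-all-with { splunk_rest_persist }
```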
If your load balancer's virtual server is serving REST calls (e.g. on port 8089) as well as your normal search traffic (e.g. on port 8000), however, you will get undesired load balancing effects, because source IP persistence then applies to your normal users as well. Either separate these into separate virtual server configs or, on LTM, use an iRule to apply different persistence styles based on either URL prefix or destination port.
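A sketch of such an iRule, switching persistence style on the destination port (untested; the port numbers follow the example above and the timeout is illustrative):

```
when CLIENT_ACCEPTED {
    if { [TCP::local_port] == 8089 } {
        # REST clients: stick on source IP for an hour
        persist source_addr 255.255.255.255 3600
    } else {
        # Browser traffic on 8000: fall back to cookie persistence
        persist cookie insert
    }
}
```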
You might also need to take into consideration scenarios where your REST endpoint is reached through a proxy or behind source NAT. In that case you will only see a single IP (or a few IPs) and will lose at least some of your load balancing if source IP persistence has been the choice.
Either way, you need to configure your load balancer with persistence/stickiness enabled on the virtual server.