SHC load balancer health monitoring, manual detention

brettwilliams
Path Finder

So we put a couple of peers into manual detention to let them "cool off" before doing some maintenance. The problem is that users were still being sent to those members despite manual detention and, as a result, couldn't search.

Is there a single unauthenticated call I can make via REST from my load balancer to remove manually detained members from the pool? If so, what is it? I'll probably dig through the REST API tonight and (hopefully) find it. Someone... beat me to it!


isoutamo
SplunkTrust

Hi

How have you configured your LB to detect that a node is down? Here is an example of a status check from the LB side that notices an SH node is down (detention probably works the same way?):

https://community.splunk.com/t5/Deployment-Architecture/Looking-for-a-URL-API-to-configure-on-load-b...
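For context, a generic probe of this kind usually just checks that the port answers at all. Here is a minimal sketch of that default behavior, with a hypothetical hostname and port:

```python
# Minimal sketch of a generic HTTPS health probe, the kind an LB monitor
# performs by default. Hostname and port are placeholders.
import requests

SEARCH_HEAD = "https://sh1.example.com:8000"  # hypothetical search head web UI

def is_up(url: str) -> bool:
    try:
        # verify=False because self-signed certs are common on Splunk hosts
        resp = requests.get(url, timeout=5, verify=False)
        return resp.status_code == 200
    except requests.RequestException:
        return False

print(is_up(SEARCH_HEAD))
```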

r. Ismo


brettwilliams
Path Finder

Detention doesn't take down the web interface though, so a generic HTTPS monitor won't reflect manual detention.

I found the manual detention on/off state at https://<hostname or ip>:8089/services/shcluster/config

On an F5 LTM, HTTPS monitors support regex receive strings, so coupled with a user and role that have the list_search_head_clustering capability and a long-lived auth token, I think it can be done.
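A minimal sketch of the check such a monitor would perform, assuming a hypothetical hostname and token (the token's user needs the list_search_head_clustering capability mentioned above; that manual_detention appears as an on/off field in the shcluster/config output is my reading of the endpoint above):

```python
# Minimal sketch of the detention check an F5 receive-string would implement.
# Hostname and token are placeholders; verify=False assumes self-signed certs.
import requests

MGMT_URI = "https://sh1.example.com:8089"  # hypothetical management port
TOKEN = "eyJ..."                           # long-lived auth token (placeholder)

def in_detention() -> bool:
    resp = requests.get(
        f"{MGMT_URI}/services/shcluster/config",
        params={"output_mode": "json"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=False,
        timeout=5,
    )
    resp.raise_for_status()
    content = resp.json()["entry"][0]["content"]
    # Assumption: manual_detention reads "off" when the member is serving
    return content.get("manual_detention") != "off"

# A member stays in the pool only while it is not detained
print("remove from pool" if in_detention() else "keep in pool")
```

On the F5 side, the receive string would presumably be a regex that only matches when manual_detention reads off, so a detained member fails the monitor and drops out of the pool.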


isoutamo
SplunkTrust

Nice! If you get it to work with the F5, I would like to know the exact solution!
r. Ismo
