I am testing out the Splunk Operator Helm chart to deploy a C3 architecture Splunk instance. At the moment everything deploys without errors: my cluster manager pulls and installs apps via the AppFramework config, and SmartStore is receiving data from the indexer cluster.
However, after creating ingress objects for each Splunk instance in the deployment (LM, CM, MC, SHC, IDXC), I have been able to log into every WebGUI except the indexer cluster's. The ingress objects all look roughly like the sketch below.
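For context, each ingress is a minimal host-based rule pointing at the instance's web port. The hostnames, namespace, and service name below are placeholders for my test values (I'm assuming ingress-nginx here; the operator names the indexer service along the lines of `splunk-<name>-indexer-service` in my deployment):

```yaml
# Rough sketch of one per-instance ingress (indexer cluster shown).
# Hostname, namespace, and service name are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: splunk-idxc-web
  namespace: splunk
spec:
  ingressClassName: nginx
  rules:
    - host: idxc.splunk.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # Splunk Web listens on port 8000
                name: splunk-idxc-indexer-service
                port:
                  number: 8000
```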
The behavior I am experiencing is basically getting kicked out of the GUI the second I type the username and password and hit Enter. The web page refreshes and I am back at the login screen.
I double-checked that the Kubernetes secret containing the admin password is the same for all of the Splunk instances. I also intentionally typed in a bad password and got a login-failed message instead of the screen refresh I get when entering the correct password.
I am not really sure how to go about troubleshooting this. I searched through the _internal index but didn't come up with a smoking gun.
Not sure what is happening with your login attempts, but I highly recommend you do not enable the WebGUI on any indexer. The cluster should only be managed by the CM; with the WebGUI enabled on indexers, the risk of configurations getting out of sync is very high.
Thanks for the response, and I absolutely agree. Enabling the ingress was purely for testing purposes and I have no intention of doing this in the final deployment.
I was able to figure out my issue. Had I thoroughly read the Splunk docs about clustering, I would have caught a small section that says when using an L7 load balancer, you need to ensure the LB configuration uses sticky/persistent user sessions.
So essentially what was happening is that I would get the login screen from idx-0, and when I hit Enter the browser made a new request, which the LB round-robined to a different indexer pod that I had not logged into. The session issued by idx-0 was not valid there, so Splunk Web bounced me back to the login screen.
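In case it helps anyone else, here is roughly what fixed it for me: cookie-based session affinity on the indexer ingress. This sketch assumes ingress-nginx; the cookie name is arbitrary, and other L7 load balancers have their own equivalent settings:

```yaml
# Sketch of cookie-based session affinity on ingress-nginx.
# Names and hostnames are placeholders for my test values.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: splunk-idxc-web
  namespace: splunk
  annotations:
    # Pin each client to one backend pod via a cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "splunk-idxc-affinity"
    # Keep the client on the same pod across requests
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
spec:
  ingressClassName: nginx
  rules:
    - host: idxc.splunk.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: splunk-idxc-indexer-service
                port:
                  number: 8000
```

With the affinity cookie in place, every request in a browser session lands on the same indexer pod, so the login survives the post-login redirect.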