Hello everyone, I'm currently trying to set up Splunk Enterprise in a cluster architecture (3 search heads and 3 indexers) on Kubernetes using the official Splunk Operator and the Splunk Enterprise Helm chart. While trying to change the initial admin credentials on all the instances, I ran into the following issue: all instances come up and become ready as Kubernetes pods except the indexers, which never start and remain in an error phase without any logs indicating the reason. Below is a snippet of my values.yaml file, which is provided to the Splunk Enterprise chart:
```yaml
sva:
  c3:
    enabled: true
    indexerClusters:
      - name: idx
    searchHeadClusters:
      - name: shc
indexerCluster:
  enabled: true
  name: "idx"
  replicaCount: 3
defaults:
  splunk:
    hec_disabled: 0
    hec_enableSSL: 0
    hec_token: "test"
    password: "admintest"
    pass4SymmKey: "test"
    idxc:
      secret: "test"
    shc:
      secret: "test"
extraEnv:
  - name: SPLUNK_DEFAULTS_URL
    value: "/mnt/splunk-defaults/default.yml"
```
Initially, I was not passing SPLUNK_DEFAULTS_URL, but after some debugging I found that the "defaults" field is only written to "/mnt/splunk-defaults/default.yml", while by default all instances read from "/mnt/splunk-secrets/default.yml", so I had to override it. After that change, the admin password was updated to "admintest" on all Splunk instances, but the indexer pods still would not start.
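For context, my understanding is that the "defaults" block from values.yaml gets rendered into the file the pods read, so I expect "/mnt/splunk-defaults/default.yml" inside the pods to contain roughly the following (this is my reconstruction from the values.yaml snippet above, not a dump from a running pod):

```yaml
splunk:
  hec_disabled: 0
  hec_enableSSL: 0
  hec_token: "test"
  password: "admintest"
  pass4SymmKey: "test"
  idxc:
    secret: "test"
  shc:
    secret: "test"
```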
Note: I also tried changing the password by providing the SPLUNK_PASSWORD environment variable to all instances, but I got the same behavior.
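Concretely, that attempt looked like this in my values.yaml, in place of the SPLUNK_DEFAULTS_URL entry shown earlier (the password value here is just my test value):

```yaml
extraEnv:
  - name: SPLUNK_PASSWORD
    value: "admintest"
```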