I have Splunk set up in multiple environments (DEV/TST/PRD), each with its own search head, deployment server, license server, and indexer cluster, running in Docker on Kubernetes.
The issue I'm having is registering the indexer clusters on the search head. Each environment has a different setup and configuration for the clusters, and we need to be able to dynamically configure the Docker containers via the CLI to add/remove cluster masters.
```
[clustering]
mode = searchhead
master_uri = clustermaster:dev

[clustermaster:us-central1-dev]
multisite = false
pass4SymmKey = pass4SymmKey0
master_uri = https://cluster-master.splunk.dev:8089

[clustermaster:us-central1-tst]
multisite = false
pass4SymmKey = pass4SymmKey1
master_uri = https://cluster-master1.splunk.tst:8089

[clustermaster:us-east1-tst]
multisite = false
pass4SymmKey = pass4SymmKey2
master_uri = https://cluster-master2.splunk.tst:8089
```
What we'd like to do, and what I've tried, is:
```
splunk edit cluster-master clustermaster:dev -master_uri clustermaster:tst1, clustermaster:tst2
```
```
[clustering]
mode = searchhead
master_uri = clustermaster:tst1,clustermaster:tst2
```
but that doesn't really work. Maybe I'm missing something on the command line, but I get "invalid URI" for those, and it doesn't do what I want to be able to do.
Do this first:
```
splunk edit cluster-master clustermaster:dev -master_uri clustermaster:tst1
splunk add cluster-master clustermaster:dev -master_uri clustermaster:tst2
```
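For reference, a conf-file sketch of what a search head registered against two cluster masters ends up looking like (stanza names, hostnames, and keys here are placeholders, not your actual values). Note that `master_uri` under `[clustering]` lists stanza names, while `master_uri` inside each `[clustermaster:...]` stanza is a real URI:

```
# server.conf on the search head -- illustrative sketch only
[clustering]
mode = searchhead
# references the stanza names below, not real URLs
master_uri = clustermaster:tst1,clustermaster:tst2

[clustermaster:tst1]
master_uri = https://cluster-master1.splunk.tst:8089
pass4SymmKey = <key1>

[clustermaster:tst2]
master_uri = https://cluster-master2.splunk.tst:8089
pass4SymmKey = <key2>
```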
But you REALLY should be doing this from apps sent from Puppet or DS or something.
I don't do this via Puppet, because we're using a Docker image and not a physical VM. I could build different Docker images per environment, but I'm trying to avoid that if at all possible. I do use volume mounts and the like for parts of this, but it's a balance we're trying to strike.
```
master_uri clustermaster:tst1, clustermaster:tst2
```
Those are not valid URIs.
Try formatting it like:
```
https://master:8089
```
(`master` is my master in my kube enviro, change accordingly)
See the output of `./splunk help edit cluster-master`:
```
'./splunk edit cluster-master https://127.0.0.1:8089 -secret newtestsecret'
'./splunk edit cluster-master https://old_server_name:8089 -master_uri https://new_server_name:8089'
'./splunk edit cluster-master https://old_server_name:8089 -master_uri https://new_server_name:8089 -secret newsecret'
```
I have a similar enviro running in Kubernetes. I have chosen to control the setup with ConfigMaps. Have you tried that instead of touching the container from the CLI?
I used the create-ConfigMap-from-files option to build my apps into kube and then mount them into the instances:
```
mmodestino-mbp:cluster01 mmodestino$ kubectl -n splunk get configmap
NAME                               DATA   AGE
k8s-cluster-fwd-local              2      13d
k8s-cluster-fwd-metadata           1      13d
k8s-cluster-idx-base-local         5      13d
k8s-cluster-idx-base-metadata      1      13d
k8s-cluster-search-base-local      2      13d
k8s-cluster-search-base-metadata   1      13d
k8s-master-base-local              3      13d
k8s-master-base-metadata           1      13d
k8s-searchcluster-base-local       1      12d
k8s-searchcluster-base-metadata    1      12d
```
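A minimal sketch of that pattern, assuming a hypothetical app layout (the ConfigMap name, app path, and image here are placeholders, adjust to your deployment): build the ConfigMap from the app's `local/` directory, then mount it into the pod spec so Splunk reads it like any other app config.

```yaml
# Create the ConfigMap from files, e.g.:
#   kubectl -n splunk create configmap k8s-cluster-search-base-local \
#     --from-file=apps/search-base/local/
# Then mount it in the search head pod spec (fragment):
volumes:
  - name: search-base-local
    configMap:
      name: k8s-cluster-search-base-local
containers:
  - name: splunk
    image: splunk/splunk   # assumption: whatever Splunk image you run
    volumeMounts:
      - name: search-base-local
        mountPath: /opt/splunk/etc/apps/search-base/local
```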
Interested to hear if this approach is useful, or if you'd rather still do it from the CLI?
Your first part is what I was trying to avoid, since we have additional settings in the cluster stanzas. And yes, I know the URI isn't valid, but the stanza-name reference is :(.
As for your comment about ConfigMaps, that's what I'm doing right now; it just complicates things a bit and creates a mess of duplication in our configurations. I'm trying to avoid that if I can 😞
Not sure I am following you... why do you want to use an invalid URI? The command can't run because of it...
Your goal is to provision the server.conf settings from the CLI of the SH, right? Or did I miss something?
There are both `./splunk add cluster-master` and `./splunk edit cluster-master` commands that do that...
If the goal is to not have to do this from the CLI... why not just mount the proper ConfigMap for server.conf? That is the "Kubernetes way" to accomplish this... way easier to manage and update than having to go into the container....
On the ConfigMaps - yeah, I only use them for bootstrap configs (like this one)... want to add a new cluster? Update the ConfigMap and refresh the pod. All other configs are managed like they would be outside kube. I like that git provides me a repo for version control etc. Ideally this should keep things neat; not sure why it creates duplication for you?
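The "update the ConfigMap and refresh the pod" step above might be sketched like this (ConfigMap, directory, and pod names are hypothetical; adjust to your setup):

```
# Re-generate the ConfigMap from the edited app files and apply it in place
kubectl -n splunk create configmap k8s-searchcluster-base-local \
  --from-file=apps/searchcluster-base/local/ \
  --dry-run=client -o yaml | kubectl apply -f -

# Bounce the search head pod so it remounts the updated config
kubectl -n splunk delete pod searchhead-0
```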