Let's say I've performed an action that triggers configuration replication across the SH cluster (e.g. created a field extraction through Splunk Web).
Now let's say Splunk Web does not allow me to edit what I've done (e.g. change the owner of the field extraction), and thus obliges me to edit the configuration files manually.
After editing the files on one SH cluster member (must I do it only on the captain?), what should I do to propagate the manual changes across the cluster?
Instead of changing it manually on one of the search heads, could you just move it up to your deployer in an app and push it to your cluster?
Not really; I'm not going to ask Splunk users to come to me every time they want to create something on the search heads.
It is either/or. Either you allow GUI changes and NEVER EVER use the Deployer to push, OR you use the Deployer to push and NEVER EVER allow GUI changes.
You should not be editing back-end configuration files manually on your SHC. Perform back-end edits on the deployer, then push the changes to the SHC.
Please review the following documentation:
Use the deployer to distribute apps and configuration updates
How configuration changes propagate across the search head cluster
There are isolated cases where you may want to change settings in
$SPLUNK_HOME/etc/system/local/ config files, but these are rare, and generally only necessary if there is a problem in the SHC.
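The deployer workflow described above looks roughly like this. A minimal sketch; the app name, hostnames, and credentials are placeholders you would replace with your own:

```shell
# On the deployer: stage the config in an app under etc/shcluster/apps.
mkdir -p "$SPLUNK_HOME/etc/shcluster/apps/my_extractions/local"
cp props.conf transforms.conf "$SPLUNK_HOME/etc/shcluster/apps/my_extractions/local/"

# Push the bundle; -target is any one cluster member's management port,
# and that member distributes the bundle to the rest of the cluster.
"$SPLUNK_HOME/bin/splunk" apply shcluster-bundle \
    -target https://sh1.example.com:8089 -auth admin:changeme
```

Note that the deployer merges a pushed app's `local` settings into `default` on the members, which is exactly why the GUI-vs-deployer conflict discussed in this thread exists.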
So, in a few words, if I do something in Splunk Web but Splunk Web does not allow me to edit what I've done, am I doomed forever?
I feel your pain; the deployer throws EVERYTHING into default (i.e. not editable). I have had some success fixing config problems by finding the captain, editing the file, reloading the captain (NOT restarting, or you lose the captain; use debug refresh), and force-syncing all members to the captain. UGLY.
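That workaround can be sketched as the following steps. Hostnames and credentials are placeholders; this assumes a healthy cluster and should be treated as a last resort, per the post above:

```shell
# 1. Find the current captain.
splunk show shcluster-status -auth admin:changeme

# 2. On the captain, after editing the .conf file, reload configurations
#    WITHOUT restarting splunkd (a restart triggers captain re-election).
#    Hit the debug/refresh endpoint in Splunk Web, e.g. in a browser:
#    https://captain.example.com:8000/en-US/debug/refresh

# 3. On each OTHER member, force a resync of replicated config
#    from the captain:
splunk resync shcluster-replicated-config
```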
I think the best bet at this stage of search head cluster maturity is to make the REST API your friend and use the configuration endpoints that allow updates to individual settings, stanzas, or whole files. These are "legal" changes, synced to the cluster via Raft calls.
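As a sketch of that REST approach, updating a props.conf stanza via the `configs/conf-{file}` endpoint; the host, app, stanza, and extraction name below are placeholders:

```shell
# POST a key/value into a props.conf stanza on one member; because the
# change goes through the REST layer, the SHC replicates it for you.
curl -k -u admin:changeme \
    https://sh1.example.com:8089/servicesNS/nobody/search/configs/conf-props/my_sourcetype \
    -d "EXTRACT-user=(?<user>\w+)"
```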
I'm sure it will get better with time; 6.6 just added a fix for ownership, which used to kill us when someone left and their ID was no longer available (yet they owned popular knowledge objects).
Depends on how strict you want to make your Splunk system.
(1) If you want to be very strict and don't want end users to edit anything major, always go via the deployer. This is easy to manage, and you can develop and test against the same code base as PROD. The next release should also back up default, etc., so it stays fairly consistent.
(2) If your end users make changes directly via the UI (e.g. creating dashboards, use cases, etc.), then their changes will override whatever you push via the deployer. This is because deployer pushes land in the "default" directory on the SH members, while UI changes go into "local", and local takes precedence over default.
I always consider SHC a pain unless you decide which option, (1) or (2), your company wants.
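To see which layer is actually winning for a given setting, btool is handy. A minimal sketch; the stanza name is a placeholder:

```shell
# Print the merged props.conf view for one stanza; --debug prefixes each
# line with the file (default vs local, and which app) that supplied it.
"$SPLUNK_HOME/bin/splunk" btool props list my_sourcetype --debug
```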
Is that example scenario based on a real one? You should be able to change ownership via REST, I think, which will propagate the change. Manually modifying a .conf file will not.
I haven't run into many scenarios in my clusters where I created something that Splunk didn't let me edit, so I was curious how common this is for you.