Deployment Architecture

Why doesn't the modification propagate to the search head cluster via the deployer?

pmerlin1
Path Finder

Hello Splunkers

 

I use the deployer to deploy apps and add-ons to a search head cluster. This works when I want to deploy a new app or delete an app, and I see that the search head cluster initiates a rolling restart after each apply shcluster-bundle command on the deployer. But when I modify a file in an app (etc/shcluster/apps) and run the apply shcluster-bundle command, the modification is not propagated to the cluster. What's wrong?
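For reference, the workflow described above corresponds to something like the following on the deployer (a sketch; the app name, member hostname, and credentials are placeholders for your environment):

```
# Edit or add files under the deployer's staging directory, e.g.:
#   $SPLUNK_HOME/etc/shcluster/apps/<app_name>/default/some.conf

# Then push the configuration bundle to the search head cluster:
splunk apply shcluster-bundle -target https://<any_shc_member>:8089 -auth admin:<password>
```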

1 Solution

pmerlin1
Path Finder

Hi 

I investigated my case further and found out why the changes were not propagated to the search head cluster.

I had installed, in the apps directory on my deployer, an add-on that backs up config files on a schedule. This add-on has a default/app.conf file with a [shclustering] stanza like this:

[shclustering]
# full as local changes are pushed via deployer and we want to preserve them
deployer_push_mode = full
# lookups from deployer priority
deployer_lookups_push_mode = always_overwrite

 

This setting changed the default push mode for all deployments from the deployer.

So I decided to remove this app from the deployer. Now all lookups are preserved, and all config files in the local directory pushed by the deployer are merged into the default directory on the cluster members, as the documentation describes.
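For comparison, if the intent had been to change the push mode for that backup add-on only, the documented place for the stanza is the app's local directory on the deployer, not default. A sketch (the app directory name is a placeholder):

```
# $SPLUNK_HOME/etc/shcluster/apps/<backup_addon>/local/app.conf
[shclustering]
deployer_push_mode = full
```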



isoutamo
SplunkTrust

Hi

this sounds like a bug!

I have always assumed that an app's app.conf applies only to that app, not globally, unless stated otherwise!

You should raise a support case about this and, if needed, ask for the docs to be updated to describe the intended behaviour.

r. Ismo


pmerlin1
Path Finder

I don't know if it is a bug, but from the documentation (https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/PropagateSHCconfigurationchanges), I understand that if you want to change the behavior for a specific app, the settings must go in local/app.conf (app-level setting), not default/app.conf (global setting).

 

(optional) Set the deployer push mode for one app only in the [shclustering] stanza in $SPLUNK_HOME/etc/shcluster/apps/<app>/local/app.conf for that specific app. You might need to add the [shclustering] stanza to the app.conf file if it is not already present. For example:
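The linked documentation follows that sentence with a sample [shclustering] stanza. As an illustration (the push mode value shown here is just one of the documented options):

```
[shclustering]
deployer_push_mode = merge_to_default
```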


isoutamo
SplunkTrust

Nice find! The way I read it:

if it is under $SPLUNK_HOME/etc/system/local on an individual SHC node / the deployer(?), then it is global, but when it is under $SPLUNK_HOME/etc/shcluster/apps/<app>/, it is local to that app. It is not dependent on default vs. local inside the app!

So if it is in your $SPLUNK_HOME/etc/shcluster/apps/<app>/, it should apply only to that one app, not to all apps.

If this actually depends on the default vs. local folder inside the apps under etc/shcluster/apps, then it is not what the docs say, and it should be reported as a bug/error in the docs.


PickleRick
SplunkTrust

Actually it's a bit unclear and calls for clarification indeed.

The https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles document doesn't mention the app.conf file in the shcluster directory at all.

Only https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges#Set_... specifies that app.conf for a given app must be configured in <app>/local/app.conf.

So my understanding is that the global push mode is processed with normal precedence rules and can be set anywhere in the "normal" chain of config parsing and layering, but the override for a specific app must be placed in that app's local directory.


isoutamo
SplunkTrust

I have read a bit more about this, and to be honest, I couldn't find a clear answer or any reason why it works this way.

The app/user configuration documentation says that app.conf is for the user/app level only; you cannot use it for global configurations. BUT the instructions ("Set the deployer push mode") still say you should put it into etc/system/local/app.conf to use it globally. This is quite confusing! And that file lives on the deployer under etc/shcluster/apps/<app>, not etc/apps/<app>, which basically means it is not merged with other app files when the bundle is applied (read: created) on the deployer. If I have understood correctly, precedence applies only to files under etc/apps/<app> + etc/system.

Usually, when you create your own app, you put all configurations into the default directory, not local. This should have no side effect other than where the settings end up when the bundle is applied on the SHC members. Of course, things like pass4SymmKeys are encrypted (and the plain text removed) only in files that are in local!

If you have apps e.g. from Splunkbase, you should put your local changes under the local directory, to avoid losing them when you update the app to a newer version.

But it shouldn't depend on whether app.conf is in default vs. local. If this has side effects, they should be mentioned in the docs. I haven't seen any mention that default vs. local is used to distinguish global vs. app-level values; those directories are only used for precedence.

Definitely this needs some feedback to the docs team.

BTW: @pmerlin1, you said that you migrated from a SH to a SHC. Did you follow the instructions and use only clean, new SHs as members of the SHC, rather than reusing the old SH without cleaning it?


badrinath_itrs
Communicator

Hi @pmerlin1 ,

Can you please elaborate on which specific file changes are not getting replicated?

It is possible that those settings were changed at runtime on the members, in which case they would not be updated from the deployer.

 

Refer to the document below for more details.

https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges

pmerlin1
Path Finder

Hi

In fact, my problem is not really a problem; it is the normal behavior of Splunk. Before, I had a single search head, and all the .conf files were in the local directory to override the default settings. When I migrated the search head to a search head cluster, I kept this principle. Splunk's philosophy and best practice is that the deployer should deploy files that are not changed "locally" on the search heads, so these files must be put in the default directory. To resolve my problem, I moved the files from the local directory to the default directory and then ran the apply shcluster-bundle command. Now it works as expected.
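The fix described above amounts to something like this on the deployer (a sketch; the app name and member hostname are placeholders):

```
# Move the overrides out of local/ and into default/ within the staged app
mv $SPLUNK_HOME/etc/shcluster/apps/<app>/local/*.conf \
   $SPLUNK_HOME/etc/shcluster/apps/<app>/default/

# Push the bundle so the changes propagate to the cluster members
splunk apply shcluster-bundle -target https://<any_shc_member>:8089
```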


isoutamo
SplunkTrust
If/when you have issues with lookups (e.g. you occasionally find old lookups on the SHC), you should check this: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges#Pres...
r. Ismo
