The deployment server is running. The deployment client is running; I confirmed that by following the troubleshooting section at http://wiki.splunk.com/Deploy:DeploymentServer.
In the GUI on the deployment server, under Forwarder Management, I can see the forwarder with the deployment client enabled; the two apps deployed to it are listed, and the last phone home was a few seconds ago.
On the deployment client, I open the config file:
~/etc/apps/Splunk_TA_nix/local/inputs.conf — let's say it is InputsVersion1.
On the deployment server, I make a change, either in the GUI (server:8000/en-GB/app/Splunk_TA_nix/setup) or by editing the file in the file system (~/etc/apps/Splunk_TA_nix/local/inputs.conf). Let's call the file now InputsVersion2.
Now, regardless of what I do, restarting splunkd on either machine (client and/or server), waiting a few days, restarting repeatedly, the deployment client does not pick up the changes to the app settings from the deployment server. Am I missing a step? I looked through the docs and cannot see it...
Can you please check the serverclass.conf settings on the deployment server?
Check how the whitelist (alias) names for the client are defined.
As a last option, restart Splunk.
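For reference, a minimal serverclass.conf sketch on the deployment server (the class name and whitelist pattern here are hypothetical; the whitelist entry must match the client's hostname, IP, or clientName):

```ini
# $SPLUNK_HOME/etc/system/local/serverclass.conf (deployment server)
[serverClass:nix_hosts]                 # hypothetical class name
whitelist.0 = myforwarder*              # must match the client's host/IP/clientName

[serverClass:nix_hosts:app:Splunk_TA_nix]
stateOnClient = enabled
restartSplunkd = true                   # restart the client when the app changes
```

If the whitelist does not match the name the client phones home with, the client will check in happily but never be mapped to the app.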
Woodcock already stated that he restarted repeatedly. What is the alias name for the client that you're referring to?
The first thing the DS client does whenever it finds that the app does not match the DS master copy is to disable the app, so that nobody can use it while it is being updated. If the client cannot disable the app, then it also cannot update it, so the DS will be (silently) deadlocked from changing the app:
allows_disable = false # <- this prevents the app from updating from DS!
Disabling the app until it can be corrected seems to be the logical way to go. The default according to the app.conf spec file is allows_disable = true.
I'm having this issue currently, and it seems to be happening with all my deployment clients: no changes are being picked up. The only way anything updates is if I completely remove the app from the deployment client, at which point the app is redeployed in full.
I assume it is a problem with the deployment server, but I can't find a setting that would cause this. Does anyone have any ideas?
Let me reiterate: DO NOT USE THIS SETTING:
$SPLUNK_HOME/etc/apps/MyApp/default/app.conf:
[install]
allows_disable = false # <- this prevents the app from updating from DS!
If you use this setting, you will see exactly what you are seeing: once first deployed, the app can never be updated or re-deployed unless you remove the current app (or this file, or this setting). The first thing the DS client does whenever it finds that the app does not match the DS master copy is to disable the app, so that nobody can use it while it is being updated. If the client cannot disable the app, then it also cannot update it, so the DS will be (silently) deadlocked from changing the app.
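A quick way to check for this on a deployment client is to grep every deployed app's app.conf (the $SPLUNK_HOME default below is an assumption; a universal forwarder typically lives at /opt/splunkforwarder instead):

```shell
# Look for any app that pins allows_disable; a hit with "false" explains
# an app that silently refuses to update from the deployment server.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
grep -R --include=app.conf "allows_disable" "$SPLUNK_HOME/etc/apps" 2>/dev/null \
  || echo "no allows_disable overrides found"
```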
I think this Splunk Answer answers it best:
The deployment server hashes the app and gives that hash to the deployment client. When the deployment client checks in with the deployment server, they compare the hash that was exchanged previously. The deployment client never re-hashes the app that was pushed to it; it merely compares the hash it was given by the deployment server.
This means that the only time the app will be re-downloaded is when it is completely missing, or when the app on the deployment server is updated, creating a new hash that no longer matches the hash the deployment client was previously given; the entire app (not just the one or two files that changed) is then re-downloaded.
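The hashing behavior can be illustrated with an ordinary content hash (a sketch only; Splunk's actual checksum algorithm is internal, and app_hash here is a made-up helper): hashing every file and then hashing the combined result means any edit to any file in the app produces a different app-level hash.

```shell
# app_hash: aggregate hash of an app directory. Hash each file, then hash
# the sorted list of per-file hashes; editing any file changes the result.
app_hash() {
  ( cd "$1" && find . -type f -print0 | sort -z | xargs -0 md5sum | md5sum | cut -d' ' -f1 )
}
```

Because the hash covers the whole app, a one-line change to inputs.conf is enough to make the stored hash stale and trigger a full re-download.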
This is untrue; the DC definitely does recalculate the hash (though I do not know on what duty cycle), because every time I have changed a DS-controlled app on a forwarder, it has immediately been restored to match the DS.
Hi, I had this issue too. The answers given all hold merit and should be followed/checked. However, I would like to add one other check, and this caught my team out for a while. On the deployment server, check splunkd.log for duplicate GUID messages (I can't remember the exact error message; it's been a while!). Our problem was caused by 'designers' and jump-server builds pushing the same Splunk config onto new builds.
If you do see this error in splunkd.log (it will list the hostnames/IP addresses), log onto those servers and stop Splunk.
Delete "/opt/splunk(forwarder)/etc/instance.cfg" on each affected server, then restart Splunk. The duplicate-GUID problem disappears because a new GUID is created per machine, the deployment server is happy again, and any apps are then pushed down.
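The fix above can be sketched as a shell sequence (paths are assumptions; SPLUNK_HOME may be /opt/splunk or /opt/splunkforwarder depending on the host):

```shell
# Run on each host that splunkd.log reports with a duplicate GUID.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunkforwarder}
if [ -x "$SPLUNK_HOME/bin/splunk" ]; then
  "$SPLUNK_HOME/bin/splunk" stop
  rm -f "$SPLUNK_HOME/etc/instance.cfg"   # a fresh GUID is generated on restart
  "$SPLUNK_HOME/bin/splunk" start
else
  echo "splunk binary not found under $SPLUNK_HOME"
fi
```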