I hope someone can give me hints on working with knowledge objects in a distributed environment. At the moment I am struggling with the following situation:
So how do I easily and centrally delete all the data under myapp/local/ on the SHC members? The only approach I came up with is un-deploying and re-deploying the app from the Deployer, but that causes a rolling cluster restart, which I'd like to avoid.
You can write a script to delete the myapp/local/ directory from the SHC members. You'd still need to refresh the SHC members for your file-system changes to take effect.
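A minimal dry-run sketch of such a script, looping over the members and printing the commands rather than executing them. The hostnames, paths, and credentials are placeholders (not from this thread), and the debug/refresh page normally needs an authenticated web session, so treat the curl line as illustrative:

```shell
# Placeholder member list -- substitute your own SHC hosts.
MEMBERS="sh1.example.com sh2.example.com sh3.example.com"

build_cleanup_cmds() {
  for host in $MEMBERS; do
    # Delete the app's local/ directory on each member...
    echo "ssh $host 'rm -rf /opt/splunk/etc/apps/myapp/local'"
    # ...then refresh that member's knowledge objects via the
    # debug/refresh page, to avoid a full splunkd restart.
    echo "curl -k -u admin:changeme https://$host:8000/en-US/debug/refresh"
  done
}

build_cleanup_cmds   # echo only; pipe to sh once the commands look right
```

Running it as a dry run first lets you sanity-check the generated commands before anything destructive touches the cluster.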
Since you just want to delete their knowledge objects, I would think you could log in to one SH node and delete them through the GUI. Replication should take care of the rest, assuming you have defined the replication port on each SH.
Yes, this works fine when the delete button is there. The problem arises when confs/dashboards are pushed from the Deployer into the app's default folder. In that case we need to delete the files from the Deployer's shcluster/apps folder and then run splunk apply shcluster-bundle to remove the content from the search head cluster. This is very similar to how the Deployment Server works. Not terrible, but you need to keep track of whether the content came from the UI or the Deployer to know how to delete it.
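As a dry-run sketch, the Deployer-side delete looks roughly like this. The app name, dashboard name, paths, target host, and credentials are all assumptions for illustration; the key point is that pushing the bundle replaces the app on the members, so a file removed from shcluster/apps disappears from the cluster too:

```shell
DEPLOYER_APPS=/opt/splunk/etc/shcluster/apps
TARGET=https://sh1.example.com:8089   # any SHC member's management port

deployer_delete() {
  app=$1; view=$2
  # Remove the object from the staged bundle on the Deployer...
  echo "rm -f $DEPLOYER_APPS/$app/default/data/ui/views/$view.xml"
  # ...then push the bundle; members replace their copy of the app.
  echo "splunk apply shcluster-bundle --answer-yes -target $TARGET -auth admin:changeme"
}

deployer_delete myapp old_dashboard   # echo only; run the output by hand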
It gets more complicated when confs/dashboards were pushed via the Deployer and then a user edits the object via the UI. This is a very common scenario for apps downloaded from Splunkbase. In this case there are two versions of the object: one in default, and one in local containing the user's updates. Splunk does not let you delete the local version via the UI while another version exists in the default folder.
Also, I've tried via the REST API, and I see the object's "removable" attribute set to 0, i.e. false.
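The flag shows up in the object's eai:acl block. Below is a hand-written sample of that block (not captured from a live system) and a small check you could run on the JSON returned by an endpoint like /servicesNS/nobody/myapp/data/ui/views/&lt;view&gt;?output_mode=json:

```shell
# Illustrative sample of the acl section for a view that also exists
# in default/ -- note removable is false:
sample_acl='{"app":"myapp","owner":"nobody","sharing":"app","can_write":true,"removable":false}'

removable_check() {
  # When removable is false, a DELETE on the same URL returns an
  # error instead of removing the local copy.
  case $1 in
    *'"removable":false'*) echo "object cannot be deleted via REST" ;;
    *) echo "safe to DELETE via REST" ;;
  esac
}

removable_check "$sample_acl"
```

Checking the flag first saves you from scripting DELETE calls that Splunk will refuse anyway.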
So ultimately you're left with the choice of doing everything via the UI, doing everything via the Deployer, or writing scripts to clean up the mess, until Splunk offers better options in the UI.
This is causing headaches for us as well. I presently have a shell script that runs the same delete command across our 12-node search cluster to remove files which I cannot delete via the UI, REST API, or Deployer. I'm leaning towards having a non-clustered development/staging environment where changes can be made more easily, then using git or rsync to push changes to the Deployer and on to the production cluster, and locking the production cluster down as read-only so it cannot be edited via the UI.
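The staging-to-production flow above could be sketched as a dry run like this. All hosts and paths are placeholders; the interesting part is rsync's --delete flag, which mirrors deletions from staging into the Deployer's bundle so removed objects actually disappear on the next push:

```shell
STAGING=/opt/splunk/etc/apps/myapp                          # non-clustered dev SH
DEPLOYER=deployer.example.com:/opt/splunk/etc/shcluster/apps/myapp

push_to_prod() {
  # --delete makes rsync remove files on the Deployer that were
  # deleted in staging, not just copy new/changed ones.
  echo "rsync -av --delete $STAGING/ $DEPLOYER/"
  # Then push the updated bundle out to the production cluster.
  echo "ssh deployer.example.com 'splunk apply shcluster-bundle --answer-yes -target https://sh1.example.com:8089 -auth admin:changeme'"
}

push_to_prod   # echo only; execute the printed commands once verified
```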