Deployment Architecture

How do you manage the content of users' Splunk apps in a Search Head Cluster?

twinspop
Influencer

Our Splunk install has a (new) Search Head Cluster. Previously we were running Search Head Pooling. I'm struggling to figure out how to manage the content of users' Splunk apps. With SHP, users could delete and change objects from any search head in the pool. Now the SHC promotes artifacts from local to default, and users cannot, for example, remove an unwanted dashboard or update a lookup file. I've tried to force the requested changes on the Deployer, but most often the changes are not pushed out to the Search Heads in the SHC.

For a specific example, let's use the lookup file situation. On the SHP, to update the file, they would upload a new one, delete the old one, and change the perms on the new one to be "app level," promoting it into place. How can they accomplish this task in SHC?

Another example: a user wanted a view removed. Since they can't do this themselves, I went to the Deployer, deleted the XML file, and issued a splunk apply shcluster-bundle command. However, the file was not removed from the SHC members. What is the proper way to do this?

1 Solution

somesoni2
Revered Legend

In a SHC, you need to put all user-created objects in the local folder on the Search Heads only. On the deployer, keep/push only the configuration that doesn't change. Basically, this should have been the migration path from SHP to SHC:

Search Head App
    default folder        -> Deployer: shcluster/apps/appname/default
    metadata/default.meta -> Deployer: shcluster/apps/appname/metadata
    local folder          -> SH: etc/apps/appname/local
    metadata/local.meta   -> SH: etc/apps/appname/metadata
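As a minimal sketch of the deployer side of that split (the app name "appname", paths, and member URI below are placeholders, not from the answer), the layout and push would look roughly like this:

    # On the deployer: stage only the static/default content of the app
    mkdir -p $SPLUNK_HOME/etc/shcluster/apps/appname/default
    mkdir -p $SPLUNK_HOME/etc/shcluster/apps/appname/metadata
    cp -r appname/default/. $SPLUNK_HOME/etc/shcluster/apps/appname/default/
    cp appname/metadata/default.meta $SPLUNK_HOME/etc/shcluster/apps/appname/metadata/
    # Push the configuration bundle (any cluster member can be the target):
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<member>:8089 -auth admin:<password>
    # User-created content (local/ and local.meta) stays on the search heads and is
    # replicated between members by the SHC itself, not pushed from the deployer.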


koshyk
Super Champion

Personally, I hate the way Splunk does the Deployer-to-SHC-member deployment. Anything we put in "local" on the Deployer gets put into "default" on the cluster members. This breaks our consistency model and makes it impossible to keep the TEST environment consistent with PROD. What we do may be wrong, but it does bring consistency across environments, if you are a consistency/version-control maniac:
- "/etc/users/.." is local to the SHC. Take a backup of it for redundancy purposes, but don't do anything else with it.
- The app's "etc/apps/myapp/local" folder on the SHC is where user updates to the app go. We capture this config using btool and merge it into the existing "local" config on the deployer (etc/shcluster/apps/myapp/local). We then re-deploy from the deployer to the SHC members on a weekly basis, making the members' "default" identical to their "local" (see the sketch after this list).
- On every major update/rebuild, we remove items from the apps' "local" on the SHC and merge with the deployer again to restore consistency.
- This exact same deployer package is deployed to the TEST systems as well, to keep the environments consistent.
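A minimal sketch of that weekly capture-and-merge cycle, assuming an app called "myapp" and using savedsearches.conf as the example .conf file (names are illustrative, not from the post):

    # 1. On an SHC member, dump the app's effective config for a given .conf file;
    #    --debug prefixes each line with the file it came from, so stanzas living
    #    in .../myapp/local are easy to spot:
    $SPLUNK_HOME/bin/splunk btool savedsearches list --app=myapp --debug > /tmp/myapp_savedsearches.txt
    # 2. Manually merge those stanzas into the deployer's copy of the app:
    #    $SPLUNK_HOME/etc/shcluster/apps/myapp/local/savedsearches.conf
    # 3. Re-deploy from the deployer with 'splunk apply shcluster-bundle' (as in the
    #    sketch above); the deployer folds local into default on the members.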

twinspop
Influencer

Is it up to us to find 3rd party utils to synch the files pushed directly to SHs? For example, I want to add a new stanza in transforms.conf. As far as Splunk is concerned, I need to copy that file individually to each SH and restart? Or is there a better way?


somesoni2
Revered Legend

If you're deploying something through the deployer, it means it's provided to users by default and they shouldn't be changing it. Items like transforms.conf that you (as the Splunk admin) are deploying shouldn't be updatable by users anyway, and pushing those changes through the deployer is the correct/best way.
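As a hedged illustration of that workflow (the app name "myapp" and the stanza below are made up for this example), the admin-owned change is staged on the deployer and then pushed, rather than copied to each member by hand:

    # On the deployer, add the new stanza to the app's transforms.conf, e.g.
    # $SPLUNK_HOME/etc/shcluster/apps/myapp/default/transforms.conf:
    #
    #   [extract_fields_ab]
    #   REGEX = ^(?<field_a>\S+)\s+(?<field_b>\S+)
    #   FORMAT = field_a::$1 field_b::$2
    #
    # Then push the bundle; Splunk performs a rolling restart of the members only
    # when the deployed changes require one:
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<member>:8089 -auth admin:<password>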

twinspop
Influencer

Okay. Wow. As Jobe would say, "I've made a huge mistake." I assume the same holds true for user files and dirs as well? When I transferred this app from the SHP to the SHC, I tarred up all the $splunk/etc/users/theAppInQuestion directories and untarred them on the Deployer in $splunk/etc/shcluster/users. Apparently this was a bad idea. The way I see it now, I should have untarred on each SH. Ugh, got some cleanup to do.


somesoni2
Revered Legend

That is correct. I actually faced the same issue, but luckily I was testing in a sandbox environment first and users were able to detect it before I moved to higher environments. Now you'd have to merge the current etc/users from the SHs into your tar file to ensure you have everything.
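A rough sketch of that cleanup, assuming the goal is simply to collect the current user objects from the members, merge them with the original tarball, and stop the deployer from pushing users again (hostnames and paths are placeholders):

    # 1. On each SHC member, capture the current user-level objects:
    tar -C $SPLUNK_HOME/etc -czf /tmp/users_$(hostname).tgz users
    # 2. Manually diff/merge these captures with the original SHP tarball so nothing
    #    created since the migration is lost.
    # 3. On the deployer, remove the staged copy so it is never pushed again:
    rm -rf $SPLUNK_HOME/etc/shcluster/users
    # 4. Restore the merged directories under $SPLUNK_HOME/etc/users on each member;
    #    manual file copies are not replicated by the SHC, so every member needs them.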
