
How to fix a locked knowledge object local/default conflict?

ianpearl
Engager

I have knowledge objects in my custom apps which are created and managed in /default by manually uploading the apps to Splunk Cloud and installing them. This causes me a couple of problems:

1. Even though they have write permissions in default.meta for sc_admin only (see the sketch after this list), users with other roles can still change the knowledge objects through the UI - for example, they can disable a saved search. Presumably this creates a new copy in /local, which means that my permissions from default.meta no longer apply because new permissions are written in local.meta. Am I correct in my assessment, and if so, what is the point of write permissions?

2. Once the user has created a /local copy of the saved search by changing or disabling it, there is a lock/conflict situation: the /local version from the UI always takes precedence, and because there is also a version in /default, I can no longer see a delete option for the UI version. So I am stuck with the UI version forever. In other words, the person with zero permissions wins over the sc_admin.
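
For reference, the permissions I ship look roughly like this (illustrative app and search names):

# etc/apps/my_app/metadata/default.meta
[savedsearches/My_Search]
access = read : [ * ], write : [ sc_admin ]
export = system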

The only ways I have found to get out of this situation are (a) to ask Splunk CloudOps to delete the files from /local, which takes three days, or (b) to rename all of the saved searches in /default, upload and install the app, manually delete the versions that the user created in the UI, rename the /default versions back again, and upload/install the app a second time.

Am I missing something in terms of a better way to rectify things when this happens, and is there a reason why this might be the correct Splunk behaviour?

Thanks in advance

Ian


woodcock
Esteemed Legend

Let me comment/correct...

> I have knowledge objects in my custom apps which are created and managed in /default
> by manually uploading the apps to Splunk Cloud and installing them. This causes me a couple of problems:
>
> 1. Even though they have write permissions in default.meta for sc_admin only,
> users with other roles can still change the knowledge objects through the UI -
> for example, they can disable a saved search.
> Presumably this creates a new copy in /local

INCORRECT.  It does write to 'local', but it only creates a stanza header with the saved search name and the disabled setting, like this:

[SavedSearchNameHere]
disabled=true

> which means that my permissions from default.meta no longer apply
> because new permissions are written in local.meta.
> Am I correct in my assessment, and if so, what is the point of write permissions?

INCORRECT.  The perms persist as-is, UNLESS some other deliberate action is made to change them.
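
If someone does deliberately change sharing or ownership through the UI, the override shows up in the app's metadata/local.meta, something like this (illustrative stanza):

[savedsearches/SavedSearchNameHere]
access = read : [ * ], write : [ power ]
owner = some_user

Absent such a stanza, the permissions in default.meta continue to govern.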

> 2. Once the user has created a /local copy of the saved search by changing or disabling it,
> there is a lock/conflict situation: the /local version from the UI always takes precedence,
> and because there is also a version in /default, I can no longer see a delete option for the UI version.
> So I am stuck with the UI version forever.
> In other words, the person with zero permissions wins over the sc_admin.

COMPLETELY INCORRECT.  The problem you are having is that the GUI does not allow anything that lives in "default" to be deleted.  In your case, if you have a saved search called "SavedSearchFOO" defined in "default", the GUI *will* show you a "delete" option in the "edit" menu, but if you click on it, you will get an error like "This saved search failed to handle removal request due to Object id=SavedSearchFOO cannot be deleted in config=savedsearches."

> The only ways I have found to get out of this situation are
> (a) to ask Splunk CloudOps to delete the files from /local, which takes three days, or
> (b) to rename all of the saved searches in /default, upload and install the app,
> manually delete the versions that the user created in the UI,
> rename the /default versions back again, and upload/install the app a second time.

That works.

> Am I missing something in terms of a better way to rectify things
> when this happens, and is there a reason why this might be the correct Splunk behaviour?

Some KOs handle this better: if the original is in "default" and somebody's edit goes into "local", a "delete" will remove the details in "local" and revert back to the original in "default".  However, this does not work for savedsearches.conf.  What I would do is set permissions so that NOBODY can edit the stuff, as sketched below.  This will force users to CLONE an object in order to change it, and makes it easier for them to realize that they should involve an admin to get the change reviewed/approved and folded back into the original app/name/folder.
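
A minimal sketch of that lock-down, assuming your Splunk version honours an empty role list in .meta (illustrative names):

# etc/apps/my_app/metadata/default.meta
[savedsearches/My_Search]
access = read : [ * ], write : [ ]
export = system

With no role in the write list, the intent is that UI edits are rejected outright, leaving a clone into the user's own namespace as the only path.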


TheWoodRanger
Explorer

Bumping this post: with the newer ACS (Admin Config Service) app management tooling, pushing apps to Splunk Cloud from an orchestration utility like Ansible is becoming more accessible and pervasive. These apps must be pushed with their configuration consolidated in /default/, so this conflict of permissions plus the precedence given to a user's local version is a real concern.

 

@ianpearl As a workaround to your problem, I would try making an API call against the affected /local/ savedsearch object path, either to update the sharing ("separating" the local and default versions again) or to delete the object at that specific path.

Try looking at the output of:
`| rest /servicesNS/-/-/configs/conf-savedsearches splunk_server=local | search title="<search title>" | table title, id, eai:acl.removable, eai:acl.owner`

to see whether there are multiple results returned, indicating two distinct "id" references.

Splunk is supposed to be designed in a way that won't allow duplicate id values for a given object. That's why I can create a duplicate object name that's private, but I can't share it globally if there's another one already shared globally (confirmed behavior on a field alias object in v9.0.4).

Theoretically, there should be a unique ID route to the "local" version of that object, likely something like:

https://.../servicesNS/<user>/<appname>/saved/searches/<search name>

which you could send a DELETE request to in order to remove it, without touching the /default/ version.
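
A hypothetical sketch of that call (placeholder host, credentials, user, app, and search name; spaces in a search name must be URL-encoded):

curl -k -u sc_admin_user:password -X DELETE \
    "https://<stack>.splunkcloud.com:8089/servicesNS/<user>/<appname>/saved/searches/My%20Search"

Keep in mind that on Splunk Cloud the management port is not always reachable directly; if it isn't, updating the sharing via the object's acl endpoint, or a CloudOps ticket, is the fallback.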
