Deployment Architecture

After migrating from a single search head to search head clustering, why have some users' saved items under the search context with global sharing gone missing?

sim_tcr
Communicator

Hello,

We were on a single search head with a lot of users, and all of them have many items saved under both the app context and the search context. Some are private and some are global in both contexts.

I have copied $SPLUNK_HOME/etc/apps (excluding the search app and any others that already exist on the new search heads) and $SPLUNK_HOME/etc/users from the old search head to the new search heads.
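For reference, that copy step can be sketched like this. The sketch runs in a throwaway sandbox with fixture directories so it is safe to execute as-is; for a real migration you would point OLD and NEW at the two instances' $SPLUNK_HOME paths, and the SKIP list of apps to leave untouched is an assumption to adjust for your site.

```shell
# Sandbox sketch of the apps/users copy, excluding apps that already exist
# on the new search heads. OLD/NEW and the SKIP list are assumptions.
set -e
SANDBOX=$(mktemp -d)
OLD="$SANDBOX/old"; NEW="$SANDBOX/new"
mkdir -p "$OLD/etc/apps/search/local" "$OLD/etc/apps/myapp/local" \
         "$OLD/etc/users/alice/search/local" "$NEW/etc/apps" "$NEW/etc/users"
echo '[a_search]' > "$OLD/etc/apps/myapp/local/savedsearches.conf"

SKIP="search launcher learned"   # apps to leave untouched on the new SHs
for app in "$OLD"/etc/apps/*/; do
    name=$(basename "$app")
    case " $SKIP " in
        *" $name "*) echo "skipping $name" ;;
        *)           cp -a "$app" "$NEW/etc/apps/"; echo "copied $name" ;;
    esac
done

# Private (user-level) knowledge objects live under etc/users
cp -a "$OLD"/etc/users/. "$NEW"/etc/users/
```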

Now a few saved items are missing for some users (only items saved under the search context with Global sharing are missing).

Any idea where those are stored, so I can move them over to the new search heads?

Thanks,
Simon Mandy

1 Solution

emiller42
Motivator

Just ran into this today. The issue appears to be that content saved prior to migration is not fully recognized by clustering. (Which makes sense: it wasn't created in the cluster, so the cluster isn't fully aware of it.) So the trick is to get Splunk to create the appropriate clustering metadata. Thankfully, this is very easy:

Edit the item and save it. You don't even have to actually change anything. Just click Edit > Edit Description and then click Save on the popup. Then you should be able to modify the permissions.

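When many objects are affected, the edit-and-save trick can be scripted: any write through the REST API re-saves the object, which regenerates its replication metadata. The sketch below is a dry run that only prints the curl calls (drop the echo to execute them); the host, credentials, owner/app context, and search names are placeholders, and note that POSTing a description overwrites the existing one unless you fetch it first.

```shell
# Dry run: print one curl per saved search; drop the echo to execute.
# Host, credentials, owner/app (nobody/search) and names are placeholders.
SH="https://sh1.example.com:8089"
AUTH="admin:changeme"

for name in "My Search" "Another Search"; do
    enc=$(printf '%s' "$name" | sed 's/ /%20/g')   # naive URL-encoding of spaces only
    # POSTing any writable field counts as a save, which regenerates the
    # object's replication metadata. Note this overwrites the description.
    CMD="curl -k -u $AUTH $SH/servicesNS/nobody/search/saved/searches/$enc -d description=resaved"
    echo "$CMD"
done
```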

sim_tcr
Communicator

This worked.
Another alternative is to clone the existing item, change the permissions on the clone, and delete the original (this is what Splunk suggested when we opened a case with them).

emiller42
Motivator

Good info!


harsmarvania57
SplunkTrust

Hi,

If you have a local directory in the search app on the old search head, copy all of its contents to the search app on the new search heads.

Hope this helps.

sim_tcr
Communicator

I moved the /Splunk/splunk/etc/apps/search/local content to the new cluster and all the missing items reappeared.
Now I am trying to make a report Global on the new clustered Splunk and I get the error below.

"Splunk could not update permissions for resource saved/searches [HTTP 500] Splunkd internal error; [{'text': "\n In handler 'savedsearch': Type = savedsearches, Context = (user: xxxxxxx, app: search, root: /apps/splunk/etc), Acting as = xxxxxxx: Replication-related issue: Cannot move asset lacking a pre-existing asset ID: /xxxxxxx/search/savedsearches/FENS JTRIGGER SIT ENV", 'code': None, 'type': 'ERROR'}]

Can anyone help?


harsmarvania57
SplunkTrust

Have you copied the local directory from the old search head's search app to the Deployer and then pushed the bundle from the Deployer to the search heads?


sim_tcr
Communicator

No.

I just copied the contents of /Splunk/splunk/etc/apps/search/local to the new search heads manually and restarted Splunk.


harsmarvania57
SplunkTrust

1.) I think the best way is to copy those contents to the Deployer and then push them to the search heads.
2.) Can you please also check the file permissions for those contents on the new search heads?


sim_tcr
Communicator

Let me make sure I understand.
Are you asking me to create an app called search under /apps/splunk/etc/shcluster/apps on my deployer, put the entire local folder from the old Splunk server there, and then push the bundle?
If I do that, /apps/splunk/etc/apps/search/default on the search heads gets overwritten with the contents of /apps/splunk/etc/shcluster/apps/search/local from the deployer.

[splunk@xxxx search]$ pwd
/apps/splunk/etc/apps/search
[splunk@xxxx search]$ ls -la | grep local
drwxrwxr-x  3 splunk splunk 4096 Mar 26 04:43 local
[splunk@xxxxxx local]$ ls -la
total 308
drwxrwxr-x  3 splunk splunk   4096 Mar 26 04:43 .
drwx------ 10 splunk splunk   4096 Mar 24 07:35 ..
-rw-rw-r--  1 splunk splunk   4759 Mar 26 02:06 commands.conf
drwxrwxr-x  3 splunk splunk   4096 Mar 24 09:02 data
-rw-------  1 splunk splunk   7062 Mar 26 02:07 macros.conf
-rw-r--r--  1 splunk splunk   5189 Mar 16 09:21 props.conf
-rw-------  1 splunk splunk  49499 Mar 24 11:28 savedsearches.conf

harsmarvania57
SplunkTrust

Ah, sorry, it's the search app, so no need to create it on the Deployer.


harsmarvania57
SplunkTrust

I think it's a permissions issue. You need to copy local.meta from the old search head to the new search heads, because all app-level permissions for saved searches are stored in local.meta.

If you already have a local.meta on the new search head, you need to append the old local.meta to the new one.
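The append can be sketched like this; it runs in a sandbox directory and the stanza contents are hypothetical. On a real member the file is $SPLUNK_HOME/etc/apps/search/metadata/local.meta. Back the existing file up first, and note that when the same stanza appears in both files, concatenation order matters because later attribute values win.

```shell
# Sandbox sketch of the local.meta append; stanza contents are hypothetical.
# On a real member the file is $SPLUNK_HOME/etc/apps/search/metadata/local.meta.
set -e
META=$(mktemp -d)   # stand-in for etc/apps/search/metadata

# local.meta already on the new search head (hypothetical)
printf '[]\naccess = read : [ * ], write : [ admin ]\n' > "$META/local.meta"

# local.meta carried over from the old search head (hypothetical)
printf '[savedsearches/My%%20Report]\naccess = read : [ * ]\nexport = system\n' \
    > "$META/old_local.meta"

cp "$META/local.meta" "$META/local.meta.bak"    # back up before touching it
# Append; if a stanza exists in both files, later attribute values win,
# so put the copy whose settings should take precedence last.
cat "$META/old_local.meta" >> "$META/local.meta"
```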


sim_tcr
Communicator

When I migrated all the user directories from the old Splunk to the new Splunk servers (using the deployment server), /apps/splunk/etc/users/axxxxxx/search/metadata/local.meta from the old server became /apps/splunk/etc/users/axxxxxx/search/metadata/default.meta on the new servers.

So I manually copied /apps/splunk/etc/users/axxxxxx/search/metadata/local.meta from the old servers to the new servers and tried again. Still no go, same error:

"Splunk could not update permissions for resource saved/searches [HTTP 500] Splunkd internal error; [{'text': "\n In handler 'savedsearch': Type = savedsearches, Context = (user: xxxxxxx, app: search, root: /apps/splunk/etc), Acting as = xxxxxxx: Replication-related issue: Cannot move asset lacking a pre-existing asset ID: /xxxxxxx/search/savedsearches/FENS JTRIGGER SIT ENV", 'code': None, 'type': 'ERROR'}]

harsmarvania57
SplunkTrust

I am talking about the app-level local.meta in the search app, ../etc/apps/search/metadata/local.meta. Copy this old local.meta from the search app on the old search head to the new search head.

If you already have a local.meta in the search app on the new search head, you need to append the old local.meta to the new one.


sim_tcr
Communicator

That was tried already.


harsmarvania57
SplunkTrust

This is my last try to solve your problem 😛

As per the Splunk documentation, the exceptions below apply when users want to change the permissions of their private searches to app level after migrating from a search head pool to search head clustering. I think the same exceptions apply when migrating from a standalone search head to search head clustering.

http://docs.splunk.com/Documentation/Splunk/6.2.2/DistSearch/Migratefromsearchheadpooling#Migrated_s...

Migrated settings get placed in default directories
The deployer puts all migrated settings into default directories on the cluster members. This includes any runtime changes that were made while the apps were running on the search head pool.

Because users cannot change settings in default directories, this means that users cannot perform certain runtime operations on migrated entities:

Delete. Users cannot delete any migrated entities.
Move. Users cannot move these settings from one app to another.
Change sharing level. Users cannot change sharing levels. For example, a user cannot change sharing from private to app-level.
Users can override existing attributes by editing entities in place. Runtime changes get put in the local directories on the cluster members. Local directories override default directories, so the changes override the default settings.
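As a hypothetical illustration of that layering (the stanza attributes are invented for the example): the migrated definition lands in default/, a runtime edit writes only the changed attribute to local/, and Splunk merges the two with local winning per attribute.

```
# etc/apps/search/default/savedsearches.conf  (placed by the deployer; users cannot edit)
[Example Migrated Report]
search = index=main sourcetype=syslog error
cron_schedule = 0 6 * * *

# etc/apps/search/local/savedsearches.conf  (written by a runtime edit on a member)
[Example Migrated Report]
# Only the edited attribute appears here; it overrides the default value,
# while 'search' still comes from the default file.
cron_schedule = 0 7 * * *
```

This is why the migrated entity itself cannot be deleted or re-shared, yet in-place edits still take effect.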


sim_tcr
Communicator

So, as per this, the behavior I am seeing is expected?
Could the following be a workaround?

Before migration, on the old Splunk search head, make all items public and then move them to the new search head cluster. Once they are on the new cluster, make them private again.
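The proposed workaround could be scripted against the REST ACL endpoint. This dry run only prints the curl calls (drop the echo to execute them); the host, credentials, owner, and search name are placeholders, and whether it avoids the asset-ID error is untested here.

```shell
# Dry-run sketch of the workaround via the saved-search ACL endpoint.
# Host, credentials, owner and the search name are placeholders.
SH="https://oldsh.example.com:8089"   # old search head, then a new member
AUTH="admin:changeme"
NAME="My%20Report"                    # URL-encoded saved search name

# Step 1, before migration on the old search head: share to app level
SHARE_CMD="curl -k -u $AUTH $SH/servicesNS/someuser/search/saved/searches/$NAME/acl -d sharing=app -d owner=someuser"
echo "$SHARE_CMD"

# Step 2, after migration on a cluster member: make it private again
PRIVATE_CMD="curl -k -u $AUTH $SH/servicesNS/someuser/search/saved/searches/$NAME/acl -d sharing=user -d owner=someuser"
echo "$PRIVATE_CMD"
```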

harsmarvania57
SplunkTrust

Not sure, because app-level searches in the app's local directory will move to the app's default directory via the Deployer. So try this first in a test environment if you have one; otherwise file a case with Splunk Support.


sim_tcr
Communicator

It did work for items that are Global.
I have opened a case anyway.


Paul1896
Path Finder

Hello @sim_tcr, did you get any response to your case? We have the same problem after migration and want to solve it.
