Hi,
We're migrating our v5 non-clustered environment to a v6 cluster with two peer nodes and multiple search heads. When we try to create a role via the GUI, we can see the local indexes but not the "cluster" indexes.
Editing authorize.conf directly works fine so far, but for ease of administration we wonder whether there is any way to make the search head's GUI list every index the cluster can see.
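For reference, this is roughly the kind of stanza we add by hand in authorize.conf on the search head (role name and index list are just examples):

[role_cluster_analyst]
importRoles = user
srchIndexesAllowed = main;firewall_logs;web_logs
srchIndexesDefault = main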
Regards
On a search head in a clustered environment, navigating to Manager > Access Control > Roles lets you create a role, and the indexes on the cluster peers should appear in the "Indexes" and "Indexes searched by default" select boxes.
You can use this view to set up roles and control permissions.
Note: Splunk versions 6.0 and 6.0.1 have a bug (SPL-74975: Distributed search head not able to retrieve the index values from indexer in Access Control). Because of this bug, when you:
i) set up a new search head, and
ii) create a new role on the Access Control page,
iii) the "Indexes searched by default" and "Indexes" boxes should list the indexes on the indexers, but they do not.
Just got bit by this one in a 7.0.0 clustered deployment with SHC. I added one capability to an existing role, saved it, and just like that the role could no longer get search results from any of its previously allowed/default indexes.
Please note, the issue has reoccurred in Splunk 7.0 and the following bug has been raised for this matter:
SPL-145546 - in 7.x in Roles admin Indexes are for local search head only
Workaround (a command-line sketch of these steps follows step 4):
Step 1) Create a local directory in the search app on the SH with the correct permissions for splunkd to access i.e.
$SPLUNK_HOME/etc/apps/search/local/data/ui/manager
Step 2) Copy an old "authentication_roles.xml" file from "$SPLUNK_HOME/etc/apps/search/default/data/ui/manager" of any 6.x installation (or download a 6.x version of Splunk and extract the file from it), then place it in the folder created in step 1.
Step 3) Refresh the SH configuration with a debug refresh via the web browser:
http://<search_head>:8000/en-US/debug/refresh
Step 4) Create a new role on the SH and you should see all your indexes configured on the index cluster.
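Putting the steps together, a rough command-line sketch (the 6.x source path and the search head host are placeholders for your own values):

# Step 1: create the local manager directory in the search app, owned by the splunk user
mkdir -p $SPLUNK_HOME/etc/apps/search/local/data/ui/manager

# Step 2: copy a 6.x authentication_roles.xml into it (source path is an example)
cp /tmp/splunk-6.x/etc/apps/search/default/data/ui/manager/authentication_roles.xml $SPLUNK_HOME/etc/apps/search/local/data/ui/manager/

# Step 3: refresh the search head's configuration, then create the role in the UI (step 4)
# open http://<search_head>:8000/en-US/debug/refresh in a browser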
Note: With the workaround above there is a known issue (SPL-146171) where only 1,000 indexes are displayed in the UI. If you have more than 1,000 indexes, modify authorize.conf to add the index(es) to the role(s) instead.
Upgraded from 6.6 to 7.0.1; the workaround did not work immediately when using the authentication_roles.xml that was still on our servers from the earlier version.
Debug/refresh resulted in this line:
Refreshing admin/remote_indexes BadRequest The following required arguments are missing: repositoryLocation.
But after downloading the XML file from a 6.6.x package, it did work! 🙂 The warning above sounds bad, but it appears to have no consequences.
Upgraded from 6.5 to 7.0.2 and had the same issue.
The fix above, copying an old "authentication_roles.xml" file from "$SPLUNK_HOME/etc/apps/search/default/data/ui/manager", worked for me as well.
The strange thing is that the bug only surfaced a week later, after I edited some roles: the indexes suddenly got dropped from certain roles. They had been working on 7.0.2 until I made changes to the roles.
Just upgraded from 6.x to 7.0.2, ran into the problem, and the workaround above solved it.
Thanks a lot!
Yes works with 7.0.0 as well. Thanks @arowsell
Didn't work with 7.0.1 at first.
After upgrading the SH to 7.1.3 it worked like a charm.
Thanks @arowsell
There's a side effect in 7.0.1: before we got the workaround in place, we saw a strange phenomenon. A regular user with access to an index could not get any events from our index cluster, while any Splunk admin running the same search on the same search head got results back. Oddly enough, this still happens for one index. I'll monitor it over the coming days and raise a ticket if needed, but more importantly: does anyone have a theory or an answer as to how this is possible? The index is mapped to a role, so it is visible to Splunk and to admins.
The fix did it: 6.0.3 instead of 6.0/6.0.1, and it works.
Thanks!
Did this bug just re-appear in Splunk 7? Not able to see the indexes in the role window anymore after the upgrade from 6.0.0 -> 7.0.0
I am seeing this issue after upgrading to 7.0.0, so I suspect the answer is yes. Prior to this we were on 6.2.1 and could see all of the indexes in the distributed search group when modifying roles on the primary SH, but now we only see the indexes local to that SH.
If we manually create indexes with the same names as the remote ones on the primary SH, the permissions work just fine, but otherwise our users can't use the remote indexes.
You would only have to manually add the index on the search head instance where you do these admin activities (role creation, summary index searches), not on all the search heads. And even though there will be a local index definition on the search head, the data (if you schedule a summary-indexing search on the search head) will still go to the main indexers only (that's where the cluster comes in). These dropdowns and list boxes are populated from local data, so you won't see clustered values. Seems like a limitation.
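If you'd rather script that placeholder index than click through the GUI, a minimal indexes.conf stanza on the search head could look like the sketch below (the index name is just an example; it only makes the name visible locally, while the actual data lives on the indexer cluster):

# $SPLUNK_HOME/etc/system/local/indexes.conf on the search head
[firewall_logs]
homePath   = $SPLUNK_DB/firewall_logs/db
coldPath   = $SPLUNK_DB/firewall_logs/colddb
thawedPath = $SPLUNK_DB/firewall_logs/thaweddb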
Thanks for the quick reply.
This means we would have to manually add a new index on each involved search head?
I see the benefits, e.g. being able to do everything via the GUI, but it is still only a workaround. I would prefer not to have any indexes on the search head. As part of the cluster, shouldn't it get the necessary information from the master node?
The same happens when creating scheduled searches with summary indexing: you need a blank index with the same name as the clustered index on the Splunk instance where you add the roles/summary-index searches.