Security

Restricting index access with srchIndexesDisallowed overwrites other group permissions

vasial
Loves-to-Learn

We have a setup where all users have access to all indexes by default. Now we need to restrict access to a specific index and grant it only to selected users.

Following this discussion, I found the srchIndexesDisallowed setting listed in the latest authorize.conf documentation (8.1.0), which made me extremely happy. But I'm having some problems after testing it.

I have "super" group with

srchIndexesAllowed = *
srchIndexesDisallowed = indexA

and "allowA" group with
srchIndexesAllowed = indexA
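
In authorize.conf terms, that is roughly the following (stanza names abbreviated here to match the discussion):

[role_super]
srchIndexesAllowed = *
srchIndexesDisallowed = indexA

[role_allowA]
srchIndexesAllowed = indexA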

What I expect to happen is:

  • people in the super group have access to all indexes except indexA
  • people in both the super and allowA groups have access to all indexes (including indexA)

Unfortunately, it looks like the srchIndexesDisallowed in super is overriding the srchIndexesAllowed in allowA. I've double-checked: if a user is a member of only allowA, they can access the index.

I don't imagine this is the intended behavior. I'm wondering whether someone else has looked into this and figured out a solution (not counting the suggestions in the above linked thread).

0 Karma

mattbg
Path Finder

I had a requirement to do something similar. This is what I did, though if I need to scale it further I may have to revisit parts of it:

  • Create a new role user_standard that inherits from the user role (and adds nothing else). Most users are assigned this role.
  • Configure srchIndexesDisallowed in the user_standard role to exclude the indexes that should not be accessible to regular users.
  • Create another role user_elevated that also inherits from the user role but does not restrict anything via srchIndexesDisallowed (see the sketch below).

This seems to work for what I need while also avoiding the overhead of maintaining permissions on specific indexes as new indexes are added.

As mentioned, if many permutations of index access become necessary, this will get messy. It would be much better if we could layer the roles to add/remove permissions, but I can see why Splunk took a most-restrictive approach where data access security is concerned.
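
A minimal authorize.conf sketch of that layout, assuming indexA is the restricted index and the role names above:

[role_user_standard]
importRoles = user
srchIndexesDisallowed = indexA

[role_user_elevated]
importRoles = user
# no srchIndexesDisallowed here, so members keep the access inherited from user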

0 Karma

richgalloway
SplunkTrust

Is it possible another config file is interfering with your settings?  Run btool to check.

splunk btool --debug authorize list role_allowA
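
To scan every role at once for a stray disallow entry, you can also pipe the full btool dump through grep:

splunk btool --debug authorize list | grep -i srchIndexesDisallowed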
---
If this reply helps you, Karma would be appreciated.
0 Karma

vasial
Loves-to-Learn

@richgalloway, not as far as I can see.

Running it on the search heads, I get:

# sudo -u splunk /opt/splunk/bin/splunk btool --debug authorize list role_allowA
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf [role_allowA]
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf accelerate_search = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf cumulativeRTSrchJobsQuota = 0
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf cumulativeSrchJobsQuota = 0
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf dispatch_rest_to_indexers = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf export_results_is_visible = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf get_metadata = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf get_typeahead = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf input_file = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf output_file = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf pattern_detect = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf request_remote_tok = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf rest_apps_view = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf rest_properties_get = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf rest_properties_set = enabled
/opt/splunk/etc/system/default/authorize.conf rtSrchJobsQuota = 6
/opt/splunk/etc/system/default/authorize.conf run_collect = enabled
/opt/splunk/etc/system/default/authorize.conf run_mcollect = enabled
/opt/splunk/etc/system/default/authorize.conf schedule_rtsearch = enabled
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf search = enabled
/opt/splunk/etc/system/default/authorize.conf srchDiskQuota = 100
/opt/splunk/etc/system/default/authorize.conf srchFilterSelecting = true
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf srchIndexesAllowed = indexA
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf srchIndexesDefault = indexA
/opt/splunk/etc/system/default/authorize.conf srchJobsQuota = 3
/opt/splunk/etc/apps/base_searchhead_config/default/authorize.conf srchMaxTime = 0

I get the same output, from the same /default/authorize.conf, when looking at role_super.

0 Karma

richgalloway
SplunkTrust

Users in both the super and allowA roles will have permissions that are combined from both roles. That means they will have both srchIndexesAllowed = indexA and srchIndexesDisallowed = indexA. Since both can't be true at the same time, one must win out, and in Splunk that one is the disallow entry.
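
A rough sketch of how I understand the merge (an illustration, not official documentation):

super:     allowed = *           disallowed = indexA
allowA:    allowed = indexA      disallowed = (none)
combined:  allowed = *, indexA   disallowed = indexA   ->  indexA is blocked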

---
If this reply helps you, Karma would be appreciated.
0 Karma

vasial
Loves-to-Learn

With the current behavior, I don't see how the problem raised in the linked discussion is resolved; I imagine srchIndexesDisallowed was introduced with exactly that in mind.

True, I can create a role that disallows indexA and then have a role with access to everything, but what happens when a second restricted index, indexB, is introduced and a separate group of people need access only to that? We'd be back to the clunky solution of listing allowed indexes for each role and updating that list every time a new index is introduced.

I imagined that the list of allowed indexes would be parsed from each role and then combined, allowing for more complex access management, instead of the rules overwriting each other.

0 Karma

richgalloway
SplunkTrust

In Splunk, disallow ("blacklist" in other configs) trumps allow ("whitelist"), so a global block will always block. It has to be one way or the other, and this is the path Splunk chose.

If indexB is created for the exclusive use of one role, then all other roles must be modified to disallow access to indexB.
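
With a layout like mattbg's above, that means appending each new restricted index to the disallow list (a sketch; authorize.conf index lists are semicolon-delimited):

[role_user_standard]
importRoles = user
srchIndexesDisallowed = indexA;indexB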

---
If this reply helps you, Karma would be appreciated.
0 Karma

richgalloway
SplunkTrust

Does the "allowA" role inherit from any other roles?  If so, what are the settings for those roles?

---
If this reply helps you, Karma would be appreciated.
0 Karma

vasial
Loves-to-Learn

allowA does not inherit from any other roles.

This is how it's set up (copied work-in-progress from another role):


[role_allowA]
srchIndexesAllowed = indexA
srchIndexesDefault = indexA
accelerate_search = enabled
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
dispatch_rest_to_indexers = enabled
export_results_is_visible = enabled
get_metadata = enabled
get_typeahead = enabled
input_file = enabled
output_file = enabled
pattern_detect = enabled
request_remote_tok = enabled
rest_apps_view = enabled
rest_properties_get = enabled
rest_properties_set = enabled
search = enabled
srchMaxTime = 0

The super role inherits from user and is basically the same, with some extra capabilities plus srchIndexesDisallowed = indexA.
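
So presumably something like this (reconstructed from the description, not copied from the actual config):

[role_super]
importRoles = user
srchIndexesAllowed = *
srchIndexesDisallowed = indexA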

0 Karma