Getting Data In

Access Control: Is it possible to create views based on index name and/or sourcetype that we can set permissions for individually?

javiergn
SplunkTrust

Hi all,

I've got a new set of logs from one of our development teams for some in-house applications.
They have written some common libraries to log this data into Splunk. At the moment this is all going into one index, but I have dynamically defined one sourcetype per application in props.conf and transforms.conf.

Whenever they create new applications, Splunk will create new sourcetypes, so the model works great from that point of view.

But there's another problem we are facing: access controls.
We want different teams to have access to different application logs within this index. But these apps are distinguished only by sourcetype and they all share the same index, so I'm not sure how to achieve this.

I know I could create 50 different indexes, one for each and every single app, but in that case, whenever my dev teams create a new application they would need to let me know so that I can create the index for them before they can send us any logs.

Also, managing 50 new indexes, which could potentially grow to 100 or 200 in a couple of years, seems like a waste of time and resources.

  • Is there any concept of views based on index name and/or sourcetype that we can permission individually?
  • Is there any other way to restrict access to sourcetypes, other than hardcoding "NOT sourcetype=foo" onto every search executed by specific user groups?
  • Any other ideas?

Thanks,
J

1 Solution

lmyrefelt
Builder

Well, I would say it is easier to manage 400 indexes and the roles associated with them than to use 40 indexes and create 400 roles based on "search terms" or whatever ...

Either you have access to the index and its application or you don't ... easy-peasy.

For "every" service/application (where it is suitable) I use an app with a role that gives access to 3 indexes: dev, stage, prod. Each index has one role attached (corresponding to a group in LDAP). Users get access to the indexes, and automatically to the app, at login.

In there I then have different views and search views into the different indexes (without having to type the index name or a search or whatever) so the users can't get lost, i.e. they just need to choose whether they want to work in dev, stage or prod.

But I am sure there are better ways of doing it.
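Roughly, this model boils down to one role per index in authorize.conf. A minimal sketch (role and index names here are made-up examples; the LDAP group-to-role mapping itself lives in authentication.conf):

    # authorize.conf -- one role per index for a hypothetical "billing" app
    [role_billing_dev]
    importRoles = user
    srchIndexesAllowed = billing_dev
    srchIndexesDefault = billing_dev

    [role_billing_prod]
    importRoles = user
    srchIndexesAllowed = billing_prod
    srchIndexesDefault = billing_prod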


musskopf
Builder

Following mikaelbje's idea, if you're running Linux you could start with something like this, using bash and the Splunk CLI:

$ cd $SPLUNK_HOME/bin
$ src="index=_internal newindex sourcetype=splunkd component=IndexProcessor earliest=-1h@h latest=now | rex field=message \"index='(?P<missing_index>\w+)'\" | dedup missing_index | table missing_index"
$ ./splunk search "$src" -header false 2>/dev/null

The search above returns everything that was sent to a nonexistent index in the last hour... for each output line you could run a ./splunk add index (see the sketch below). Or you might be happy by just setting up an alert 🙂
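For example, a rough (untested) sketch of that loop, reusing the $src search from above; the CLI will prompt for credentials unless you pass -auth:

    # create every missing index the search above reports, with default settings
    ./splunk search "$src" -header false 2>/dev/null | while read -r idx; do
        [ -n "$idx" ] && ./splunk add index "$idx"
    done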

Cheers


mikaelbje
Motivator

You can probably do this using the | rest command to look up the current user and their groups. You'd then wrap that in a macro or similar and prefix or suffix every search with it, including the logic needed to filter out unwanted groups. This way you could even compare the results of the rest lookup with the results of the main search to filter based on event data.

It's far from an ideal solution, though, and it's poor security: users can still reach the data if they search manually. I'd recommend using separate indexes instead, as they don't add an extra layer of confusion.
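A rough sketch of what such a macro could expand to, assuming a hypothetical lookup sourcetype_acl.csv that maps each role to the sourcetypes it is allowed to see:

    index=app_logs
        [ | rest /services/authentication/current-context splunk_server=local
          | fields roles
          | mvexpand roles
          | lookup sourcetype_acl role AS roles OUTPUT sourcetype
          | dedup sourcetype
          | fields sourcetype
          | format ]

The subsearch turns the allowed sourcetypes into an OR filter on the main search, but as said above this is convenience rather than security.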



javiergn
SplunkTrust

Thanks all for your answers.
My only concern regarding multiple indexes is the fact that I won't be able to easily create those new indexes dynamically when my devs write a new application.


lmyrefelt
Builder

You can easily automate it using scripts that call the various CLI commands and/or the REST API.

I am using a script to add new indexes, roles, and a (standardized) serverclass for the forwarder configuration, and to deploy it to the indexers ... It also creates the search head app from a template, linking/wrapping the config/documentation page from our wiki, and deploys this together with the roles and whatnot to my search head cluster.

I then use the API and/or CLI (also scripted) to reload the configs, activating the new app/data input and making it available in no time 🙂
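For reference, the kind of calls such a script comes down to (index name, role name and credentials below are placeholders):

    # create the index via the CLI ...
    $ ./splunk add index app_foo -auth admin:changeme

    # ... or via the REST API
    $ curl -k -u admin:changeme https://localhost:8089/services/data/indexes -d name=app_foo

    # and a role that is allowed to search it
    $ curl -k -u admin:changeme https://localhost:8089/services/authorization/roles \
        -d name=role_app_foo -d srchIndexesAllowed=app_foo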

Next thing to do is a self-service portal for creating the data inputs (forwarder configuration) for the system owners/devs 😉


lmyrefelt
Builder

No, it's not on GitHub (at least not yet).


musskopf
Builder

I have similar issues with access control over here. At the moment the only way I found was creating multiple indexes.
