Getting Data In

Overriding Splunk default inputs.conf & adding meta tags to achieve item-level filtering of events in _* indexes

learningmode
Loves-to-Learn Everything

Hello,

We are attempting to use Splunk Cloud as a multi-tenant environment (one company, separate entities) in a single subscription.

We have an index design that isolates data to each tenant and aligns with RBAC assignments. That gets us index-level isolation for data sources that are specific to each tenant. We also stamp all non-Splunk events with a tenant code so that role-based access restrictions can filter returned data down to only what matches your assigned code. This approach allows for event-level filtering from indexes where data is for ALL tenants, such as lastchanceindex.

That last set of logs we need to control access to are the underscore indexes. These events are collected based on the inputs.conf files that deploy with the HF and UF agents, which do NOT have our tenant codes associated with them.

I was looking for any feedback from the community as to what the downside might be to copying ../etc/system/default/inputs.conf into ../etc/system/local/inputs.conf and adding "_meta = tenantcode::<your_tenant_code_goes_here_without_the_angled_brackets>" to each stanza. All Splunk-generated events in the underscore indexes would then also carry tenant codes, and we'd be able to achieve item-level filtering for those as well.
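For reference, a minimal sketch of what that override could look like. The stanza names below are two of the standard internal-log inputs from etc/system/default/inputs.conf, and tenant_a is a placeholder code; note that Splunk's configuration layering means only the settings being changed need to appear in the local or app-level file, not a full copy of the default file. An indexed field also needs a fields.conf entry on the search tier:

```ini
# inputs.conf (system/local or an app on each HF/UF)
# Only the added setting is needed; everything else in each stanza
# is inherited from etc/system/default/inputs.conf.
[monitor://$SPLUNK_HOME/var/log/splunk]
_meta = tenantcode::tenant_a

[monitor://$SPLUNK_HOME/var/log/watchdog/watchdog.log*]
_meta = tenantcode::tenant_a

# fields.conf (search tier) so tenantcode is treated as an indexed field
[tenantcode]
INDEXED = true
```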

Thanks in advance for any feedback and opinions!


gcusello
SplunkTrust

Hi @learningmode ,

I suppose that you know the origin of your data in terms of hostnames.

So you could try to upload a custom add-on to Splunk Cloud that overrides the index value based on the hostname.

In addition, it's a best practice to have a Heavy Forwarder for each customer as a concentrator; in that case you could add your tenant field as metadata.

That way you don't need to upload the add-on to Splunk Cloud, because you can do the override job on the Heavy Forwarder itself.
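A rough sketch of the hostname-based index override described above, as it might appear in a custom add-on (the host pattern, transform name, and destination index name are all placeholders to adapt to your environment):

```ini
# props.conf
[host::tenant-a-*]
TRANSFORMS-route_tenant = route_to_tenant_a

# transforms.conf
[route_to_tenant_a]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = tenant_a_internal
```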

Ciao.

Giuseppe


learningmode
Loves-to-Learn Everything

Hi gcusello,

Thank you for your time to review this, and your quick reply! We do in fact have heavy forwarders at each tenant, so there is local control over any traffic passing through those. That said, I am told we have over 40,000 endpoints reporting directly to the cloud, so we definitely do not know the hostnames for everything. Each tenant does have their own HEC key for those submissions, so that would be about the only way to identify and separate that traffic.
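Since each tenant already has its own HEC token, one possible option is to stamp the tenant code on the token's input stanza itself; whether _meta is honored on HEC token stanzas in your Splunk Cloud version is an assumption worth verifying with Splunk support before relying on it (the token name below is a placeholder, and the token value is intentionally left as a placeholder too):

```ini
# inputs.conf on the HEC receiving tier (placeholder names;
# _meta support on HEC token stanzas is an assumption to verify)
[http://tenant_a_hec]
token = <tenant A token value>
_meta = tenantcode::tenant_a
```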

In my original posting, my intended means to make these changes was via a custom inputs.conf file on all the HFs and UFs rather than modifying the events on arrival, since the inputs file is already at the source.

Aside from the overhead and logistics of deploying that file to that many workstations and servers, I was wondering if there's any downside to this approach from the perspective of the Splunk agent itself. The ability to override some or all of any .conf file is built into the design, so it seems like a custom inputs.conf wouldn't be a problem in any way. Any feedback on that approach? Thank you!
