Splunk Search

srchFilter usage in backend with multiple roles

Karthikeya
Communicator

We create two indexes per application (one for non-prod and one for prod logs) in the same Splunk environment. They create two AD groups (np and prod). We create the indexes and roles and assign them to the respective AD groups. Up to this point everything is fine.

Now we have created a single summary index for the data of all prod indexes, and we need to give all app teams access to that index. Since it is a single summary index, I thought of filtering at the role level using srchFilter and the service field, so that one user is restricted from seeing other apps' summary data.

Below is the role created for non-prod

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod


Below is the role created for prod 

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index::prod OR (index::opco_summary service::juniper-prod))


I am not sure whether to use = or :: here. When I test in the UI, it gives a warning when I use =, but when I use ::, the search preview returns no results. Which one should I use?

My doubt is: if a user with these two roles searches only index=non_prod, will he see results or not? How does this search work in the backend? Is there any way to test it? Also, a few users are part of 6-8 AD groups (6-8 indexes). How does srchFilter work in that case? Please clarify.

0 Karma

Karthikeya
Communicator

@richgalloway @PickleRick I checked with ChatGPT and explored authorize.conf, and thought of using the below. Please check and verify, and let me know whether it will work.

Below is the role created for non-prod.

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = index=non_prod

Below is the role created for prod.

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod) OR (index=opco_summary AND service=juniper-prod)

Still confused about = vs ::. index and service are both not indexed fields here, hence I used =.

0 Karma

PickleRick
SplunkTrust
SplunkTrust

For a search-time field you cannot use the :: syntax.
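As an illustration of the difference (index and field names taken from this thread), the :: syntax only matches fields that exist at index time:

```
# Matches: index and sourcetype are index-time fields
index::opco_summary sourcetype::stash

# Returns nothing when service is only extracted at search time
index=opco_summary service::juniper-prod

# Correct for a search-time field
index=opco_summary service=juniper-prod
```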

Karthikeya
Communicator

OK, hence I gave = for service and index. I hope it will work. Will the stanzas I have given work as expected, or does srchFilter have some behaviour that isn't well defined?

0 Karma

PickleRick
SplunkTrust
SplunkTrust

Using search-time fields in search filters for limiting user access can be easily bypassed.

A role's search filter generates an additional condition (or set of conditions) that is appended to the original search. So, for example, if your user searches for

index=windows

and his role has a search filter of

EventID=4648

the effective search spawned is 

index=windows EventID=4648

And all seems fine and dandy: the user searches only for the given EventID. But the user can simply create a calculated field that assigns the static value 4648 to all events. Then every event will match the search filter, and all events (even those not originally having EventID=4648) will be returned.

So search filters should not (at least not when used with search-time fields) be used as access control.
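A minimal sketch of the bypass described above, assuming the user can define a calculated field in some app context (the sourcetype name here is hypothetical):

```ini
# props.conf in an app the user can write to
[wineventlog]
# Calculated field: overwrites EventID on every event at search time,
# so a role srchFilter of EventID=4648 now matches everything
EVAL-EventID = 4648
```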

Karthikeya
Communicator

cat props.conf

[opco_sony]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000

# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false

TRANSFORMS-0_fix_hostname = syslog-host
TRANSFORMS-1_extract_fqdn = f5_waf-extract_service
TRANSFORMS-2_fix_index = f5_waf-route_to_index

 

cat transforms.conf

# FIELD EXTRACTION USING A REGEX
[f5_waf-extract_service]
SOURCE_KEY = _raw
REGEX = Host:\s(.+)\n
FORMAT = service::$1
WRITE_META = true

# Routes the data to a different index -- this must be listed in a TRANSFORMS-<name> entry.
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("service_indexname_mapping.csv", json_object("service", service), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), if(isnotnull(index) and match(index, "_cont$"), index, indexname), index), service:=null(), indexname:=null()

 

cat service_indexname_mapping.csv

service,indexname
juniper-prod,opco_juniper_prod
juniper-non-prod,opco_juniper_non_prod

This is the backend configuration that routes logs from the global index to separate indexes based on the service name. How do I make this service field an indexed field?
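For what it's worth, the f5_waf-extract_service transform above already writes service as an indexed field (WRITE_META = true); the INGEST_EVAL then removes it again with service:=null(). If you stopped nulling it out, you would also declare it on the search head in fields.conf so searches treat it as indexed. A sketch, under those assumptions:

```ini
# fields.conf (search head)
# Assumes service is still written to the index at ingest time
# (WRITE_META = true) and no longer nulled out by the INGEST_EVAL
[service]
INDEXED = true
```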

0 Karma

Karthikeya
Communicator

@PickleRick OK, and how does this apply in my case? If I restrict them based on service for the summary index, then even if a user runs |stats count by service, he cannot see other services' data, right? What else could he do here?

0 Karma

PickleRick
SplunkTrust
SplunkTrust

In your case the user can define his own field that always has the value matching the search filter.

The simplest way to do so would be to create a calculated field

service="your service"

And if your search filter relies on the service="your service" condition, well, that condition will then be met for all events, effectively rendering this part of the filter useless.

0 Karma

Karthikeya
Communicator

@PickleRick then if I make service an indexed field, will it solve my problem, or is there any chance this can still be bypassed at some point?

0 Karma

PickleRick
SplunkTrust
SplunkTrust

Wait a second. You're doing summary indexing. That means you're saving your summary data with the stash sourcetype. It has nothing to do with the original sourcetype: even if your original sourcetype had service as an indexed field, in the summary events it will be a normal search-time extracted field.

And generally you shouldn't fiddle with the default internal Splunk sourcetypes.

0 Karma

Karthikeya
Communicator

@PickleRick sir, what can I do now? I am breaking my head over this. Is there no option left other than creating a separate summary index per app? If so, can I write the respective summary data back into the same app index (e.g. for appA, both the app's events and its summary go to opco_appA)?

0 Karma

PickleRick
SplunkTrust
SplunkTrust

It's not that you can't do this or that. It's just that using a search filter is not a sure method of limiting access. No one forbids you from doing it, though. Just be aware that users can bypass your "restrictions".

Also, you technically can edit the built-in stash sourcetype; it's just very, very strongly not recommended to do so.

As I said before, you can index the summary back into the original index, but it might not be the best idea due to (as I assume) the greatly different volume of summary data vs. original data.
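For illustration, writing the summary back into the app's own index, as discussed here, would look roughly like this at the end of the scheduled summary search (index name taken from the mapping CSV earlier in the thread; the base search is a placeholder):

```
index=opco_juniper_prod <base search>
| stats count by service
| collect index=opco_juniper_prod
```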

So the best practice is to have a separate summary index for each group you have to grant access rights to separately. There are other options which are technically possible, but no one will advise them because they have their downsides and might not work properly (at least not in all cases).

Asking again and again doesn't change the fact that the proper way to go is to have separate indexes. If for some reason you can't do that, you're left with the already described alternatives, each of which has its cons.

splunklearner
Communicator

You can configure it this way and test it; it will more or less work, but as you know, this is not secure.

Below is the role created for non-prod

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = (index=non_prod)

Below is the role created for prod

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod) OR (index=opco_summary AND (service=juniper-prod OR service=juniper-cont))

I think this can help you.

0 Karma

Karthikeya
Communicator

@PickleRick will this work for me? What @splunklearner has given above...

0 Karma

PickleRick
SplunkTrust
SplunkTrust

As I said before - you _can_ use search-time fields but your users can bypass it if they know about it and know how.

Karthikeya
Communicator

@PickleRick OK, got it. So the secure option is creating a separate index per application. But we have nearly 500 indexes coming into overall scope, and as of now we have created 100+ indexes, which means 50 apps (two indexes per app, non-prod and prod). If I create summary indexes for these, it would mean even more indexes. Ideally, how many indexes should there be in an environment? We are using volumes and SmartStore as well. Will it be very difficult to manage these indexes in the future?

0 Karma

richgalloway
SplunkTrust
SplunkTrust

Having a lot of indexes can work against you.  It means the UI can take longer to load.  It also means indexers have to open and unzip more files.  It may also lead to more buckets for the Cluster Manager to track.

Some are tempted to create a new index for each data source.  Resist that temptation.  A new index is needed if:

1) New access requirements are needed for some data

2) New retention requirements are needed for some data

3) Data volume is high enough that searches for low-volume data in the same index are affected

---
If this reply helps you, Karma would be appreciated.
0 Karma

Karthikeya
Communicator

@richgalloway This is one of the reasons I am afraid of creating dedicated summary indexes again.

0 Karma

PickleRick
SplunkTrust
SplunkTrust

You're a bit stuck in choosing lesser evil.

But.

You can leverage the TERM() directive. Since stash data uses key=value pairs, instead of matching a search-time extracted field you can set your filter to

(index=prod) OR (index=opco_summary AND (TERM(service=juniper-prod) OR TERM(service=juniper-cont)))
0 Karma

Karthikeya
Communicator

But when I try to use TERM for the service field, no values are returned. The service field is still there in my raw summary event. Not sure what went wrong.

(index=prod) OR (index=opco_summary AND TERM(service=JUNIPER-PROD))

 

I even checked with only the summary index and TERM on service, and it is not working.

This is my raw data for the summary index. I extracted service from the original index, did |eval service = service, and then collected it into the summary index...

07/31/2025 04:59:56 +0000, search_name="decode query", search_now=1753938000.000, info_min_time=1753937100.000, info_max_time=1753938000.000, info_search_time=1753938000.515, uri="/wasinfkeepalive.jsp", fqdn="p3bmm-eu.systems.uk.many-44", service="JUNIPER-PROD", vs_name="tenant/juniper/services/jsp" XXXXXX

 

0 Karma

PickleRick
SplunkTrust
SplunkTrust

Ehh... I didn't notice the value was enclosed in quotes. Quotes are major breakers, so TERM won't work then.
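One way to confirm what actually landed in the lexicon is walklex (index name from this thread; requires appropriate access to the index). With service="JUNIPER-PROD" in the raw event, the quotes split the token, so you would expect to see service and JUNIPER-PROD as separate terms rather than a single service=JUNIPER-PROD term, roughly:

```
| walklex index=opco_summary type=term
| search term=*JUNIPER*
```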

0 Karma