Deployment Architecture

How to manage having deployment apps that require different indexes, but are otherwise exactly the same


I am in a situation where I need multiple deployment apps only because different parts of our environment need different indexes. I am trying to get away from having multiple apps (or at least minimize them). Are there any tricks I can employ so that inputs.conf points to a specific index based on hostname or some other unique attribute (preferably the clientName from deploymentclient.conf)?

Any help is MUCH appreciated!

Esteemed Legend

The way to do this is to write the app so that EVERY search calls the same macro or eventtype. Each user then copies this knowledge object and keeps the permissions at user level, so that when he runs the app, the search uses his own index=myStuff setting. Each user can have a different setting but use the exact same app.
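A minimal sketch of this approach (the macro name and index are made up for illustration; only `index=myStuff` comes from the post above):

```
# macros.conf -- each user's private copy; permissions kept at user level
[my_index]
definition = index=myStuff

# every search in the app then calls the macro instead of a literal index:
# `my_index` sourcetype=access_combined | stats count by host
```

Because the macro is shared at user level rather than app level, each user's copy can point at a different index while the app's searches stay identical.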



As well as the index redirection mentioned in the previous answer, there are two other possibilities. The simple approach is to separate the original app into two parts: the first app just has a default stanza with an index= line, and the second app is the original app with all the index entries removed. You then create an app for each index and push these out to the forwarders based on your requirements. When you now push out your original app without the index configuration, the index setting will be picked up from the new index app. This approach works well when you have lots of apps to push out to forwarders: once the index app is deployed, further apps do not need the index defined, which means you can make the additional apps generic.
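A rough sketch of the two-app split (the app names, index, and monitor path are assumptions for illustration):

```
# site_dev_index/local/inputs.conf -- one small app per index/site
[default]
index = dev_index

# myapp_inputs/local/inputs.conf -- the generic app, with no index= lines
[monitor:///var/log/myapp]
sourcetype = myapp:log
```

The `[default]` stanza's index= applies to every input on that forwarder that does not set an index of its own, so the generic app picks it up automatically.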

The second option is more challenging and is probably not supported. You can push out an app which runs a batch file from the app's bin folder. The script reads a file with the client name and modifies inputs.conf to set the custom value for the index. This technically works (I use this method to manage a lot of forwarder settings that are not manageable through the .conf files), but it can be challenging to debug if you don't have access to a client during testing.
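A hypothetical sketch of that unsupported approach (the paths, app name, hostname patterns, and index names are all assumptions, not from the post):

```shell
#!/bin/sh
# Sketch of a script shipped in the deployed app's bin/ folder that
# rewrites inputs.conf based on the client's hostname.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunkforwarder}
CONF="$SPLUNK_HOME/etc/apps/myapp_inputs/local/inputs.conf"

# Pick an index from the client's hostname (patterns are illustrative).
case "$(hostname)" in
  dev-*) INDEX=dev_index ;;
  qa-*)  INDEX=qa_index ;;
  *)     INDEX=prod_index ;;
esac

# Rewrite any existing index= lines in place (GNU sed syntax).
[ -f "$CONF" ] && sed -i "s/^index *=.*/index = $INDEX/" "$CONF"
echo "index set to $INDEX"
```

As the answer notes, this runs outside Splunk's normal configuration management, so test it carefully on a client you can reach.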

With respect to the index redirection, you will need to be careful of the index-time processing rules. Generally, data will only be processed once at index time, so you have to be careful with the rules in props.conf. For example, if you already have a rule for processing data based on source, a new rule based on host may be ignored.


Splunk Employee

I am not exactly sure if I am understanding the question correctly, but it sounds like you have data where some needs to go to DEV and other data needs to go to PROD (for example)? If that is the case and ALL the data of a given input must go to one or the other, you can use _TCP_ROUTING, a setting configured on the input itself (inputs.conf):

## inputs.conf ##
 _TCP_ROUTING = <tcpout_group_name>,<tcpout_group_name>,<tcpout_group_name>, ...
    * Comma-separated list of tcpout group names.
    * Using this, you can selectively forward the data to specific indexer(s).
    * Specify the tcpout group the forwarder should use when forwarding the data.
      The tcpout group names are defined in outputs.conf with
      [tcpout:<tcpout_group_name>].
    * Defaults to groups specified in "defaultGroup" in [tcpout] stanza in
      outputs.conf.
    * To forward data from the "_internal" index, _TCP_ROUTING must explicitly be
      set to either "*" or a specific splunktcp target group.
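Putting that together, a sketch of the paired configuration (the group names, servers, and monitor path are examples, not from the question):

```
# outputs.conf on the forwarder
[tcpout]
defaultGroup = prod_indexers

[tcpout:prod_indexers]
server = prod-idx1.example.com:9997

[tcpout:dev_indexers]
server = dev-idx1.example.com:9997

# inputs.conf -- this input bypasses the default group and goes to dev
[monitor:///var/log/devapp]
index = dev_app
_TCP_ROUTING = dev_indexers
```

Inputs without a _TCP_ROUTING setting continue to follow defaultGroup, so only the exceptions need the extra line.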

If instead the specific input is monitoring a data source that has data for multiple locations, you can use a props/transforms configuration to route the data to multiple hosts, but you would need to utilize a Heavy Forwarder. The following document outlines that very well:
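As a rough sketch of that props/transforms routing on a heavy forwarder (the source stanza, regex, and group name are assumptions for illustration):

```
# props.conf
[source:///var/log/combined.log]
TRANSFORMS-routing = send_to_dev

# transforms.conf
[send_to_dev]
REGEX = env=dev
DEST_KEY = _TCP_ROUTING
FORMAT = dev_indexers
```

Events matching the regex are sent to the dev tcpout group; everything else follows the forwarder's default group.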

Finally, if you are trying to route data to a specific INDEX based on what is in the data, you could use the same method as route-and-filter, but route the data to an INDEX instead. To do this you would use "DEST_KEY = _MetaData:Index" instead of "DEST_KEY = _TCP_ROUTING". The following answer post describes this in pretty good detail:
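For example, keying the index off the host metadata (the sourcetype, regex, and index name are illustrative only):

```
# props.conf
[myapp:log]
TRANSFORMS-index = route_to_dev_index

# transforms.conf
[route_to_dev_index]
SOURCE_KEY = MetaData:Host
REGEX = ^host::dev-
DEST_KEY = _MetaData:Index
FORMAT = dev_index
```

Note that values in MetaData:Host carry a "host::" prefix, which is why the regex anchors on it.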

Sr. Technical Support Engineer


If you can clearly identify your logs, you could configure the same index in only one TA on every forwarder and then override it in the indexing phase, depending on a regex.

On your indexer or heavy forwarder:

 # etc/system/local/transforms.conf 
 [overrideindex]
 DEST_KEY = _MetaData:Index
 REGEX = .
 FORMAT = my_new_index

 # etc/system/local/props.conf 
 [<your_sourcetype>]
 TRANSFORMS-index = overrideindex

In any case, I suggest you choose a distinct way to identify your logs!

