I have Splunk_TA_aws and Splunk_TA_aws_knowledgeonly deployed to a distributed Splunk environment from a deployment server. On the heavy forwarders we have configured inputs through the UI, which are written locally to Splunk_TA_aws/local/inputs.conf on each heavy forwarder.

Now I need to adjust some search-time extractions, so on the deployment server I have edited Splunk_TA_aws_knowledgeonly/local/props.conf with my changes. This way, when I deploy the config, the local inputs on the heavy forwarders are not overwritten.

It is my understanding that Splunk_TA_aws_knowledgeonly/local/props.conf should be able to override Splunk_TA_aws/default/props.conf, given that they contain the same stanza name, but my search-time extractions don't seem to be working. If, however, I simply move the file from Splunk_TA_aws_knowledgeonly/local/props.conf to Splunk_TA_aws/local/props.conf manually on a test server, the extractions begin working immediately.
What am I doing wrong?
I have been working with support for a little while now. So far we have determined that it is indeed tied to the name of the app somehow. If I rename Splunk_TA_aws_knowledgeonly/ to just knowledgeonly/ then the local/props in knowledgeonly/ overrides the default/props in Splunk_TA_aws/ as one would expect.
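For concreteness, the rename test on the test search head amounted to roughly the following (a sketch, assuming a standard $SPLUNK_HOME layout; these are not the exact commands we ran):

```shell
# Rename the knowledge app so its name no longer shares the Splunk_TA_aws prefix,
# then restart so the layered configuration is re-read.
mv "$SPLUNK_HOME/etc/apps/Splunk_TA_aws_knowledgeonly" "$SPLUNK_HOME/etc/apps/knowledgeonly"
"$SPLUNK_HOME/bin/splunk" restart
```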
We believe it has something to do with app vs global context as outlined here:
(scroll down to "Summary of the effect of directories on configuration precedence")
We also tried setting the props stanza priority manually via:
priority = 5
where the priority on the stanza in local is manually set to a higher number and the one in default is manually set to a lower number.
This had no effect when using the folders Splunk_TA_aws/default and Splunk_TA_aws_knowledgeonly/local.
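What we tried looked roughly like this (the sourcetype stanza name below is illustrative, not necessarily one the TA actually ships):

```
# Splunk_TA_aws/default/props.conf (sketch)
[aws:cloudtrail]
priority = 1

# Splunk_TA_aws_knowledgeonly/local/props.conf (sketch)
[aws:cloudtrail]
priority = 10
```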
At this point I can deploy the config I wanted to by just using a different app name and I am confident that something specific with this TA is going on. I have many other apps with the same name prefix just like this which function fine. Support will continue to investigate but at this point I am stumped.
The one app "that works" has a default.meta file that exports to all apps (AKA global), whereas the other app's file does not. Copy the default.meta file from the working app to the non-working app.
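For reference, the export-to-global setting lives in the app's metadata; a minimal sketch of what a globally exporting default.meta looks like (the permissions shown are typical defaults, not necessarily what this TA ships):

```
# Splunk_TA_aws_knowledgeonly/metadata/default.meta (sketch)
[]
access = read : [ * ], write : [ admin, power ]
export = system
```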
Thanks for the response.
When I deployed the _knowledgeonly app I made sure to copy the metadata. As you can see, they are the same in both apps:
-bash-4.2$ cat Splunk_TA_aws*/metadata/*
# Application-level permissions
access = read : [ * ], write : [ admin, power ]
export = system

[views/setup]
export = none

[html/Configuration]
owner = none
export = none

[html/Inputs]
owner = none
export = none

# Application-level permissions
access = read : [ * ], write : [ admin, power ]
export = system

[views/setup]
export = none

[html/Configuration]
owner = none
export = none

[html/Inputs]
owner = none
export = none
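To confirm which file's stanza actually wins at search time, btool can show the effective settings along with the file each one comes from (the sourcetype name here is illustrative):

```shell
# --debug prefixes each effective setting with the file it came from,
# which makes layering/precedence problems visible immediately.
$SPLUNK_HOME/bin/splunk btool props list aws:cloudtrail --debug
```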
Are you using Splunk Enterprise Security (an older version, not the latest), and are you searching inside that app's scope? If so, you need to modify the whitelist so that your app's knowledge is imported into ES:
In the newest version of ES, this feature has been removed.
The app name definitely does not matter, because the original is in $SPLUNK_HOME/etc/apps/whateverapp/default/whatever.conf (default) and the replacement is in $SPLUNK_HOME/etc/apps/whateverapp/local/whatever.conf (local), so whateverapp makes no difference at all. My suspicion is that you are not deploying the apps to the correct place. If these are index-time extractions, they must be deployed to the first full instance of Splunk that handles the event (usually the HF tier or IDX tier). If these are search-time extractions, they must be deployed to the search heads.
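For completeness, targeting the knowledge app at the search-head tier from a deployment server is just a serverclass mapping; a minimal sketch with illustrative class names and host patterns:

```
# deployment server: serverclass.conf (sketch)
[serverClass:search_heads]
whitelist.0 = sh-*

[serverClass:search_heads:app:Splunk_TA_aws_knowledgeonly]
restartSplunkd = true
```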
Thanks for your response. I am certain that these are search-time extractions. As I said above, if I simply move the config from one app to the other on a single search head and restart, everything works on that search head.