The apps I am working on right now are the Splunk App for CEF, the Splunk App for Stream, and the Splunk App for ThreatConnect, for starters.
Where are the conf files?
1) Many of the issues stem from devs building on standalone instances and forgetting that in a search head cluster, an app's .conf files get consolidated into the app's default directory so that user-created objects in the app's local directory are not overwritten on the next bundle push from the deployer. Some apps rely on indexes.conf, props.conf, and transforms.conf settings that need to reside on the indexer(s), assuming you're implementing index clustering as well, or on heavy forwarders if you are doing any pre-parsing of logs. This is the easiest of the issues to fix: replicate the conf files and use the deployment server to push them out. It may lead to a larger, more complicated issue, which I will get to a bit later in issue #3.
- ta_nix
- ta_win
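As a sketch of the fix, the parsing-time settings can be copied into a deployment app and mapped to the indexers via the deployment server. The app name, server class, and whitelist pattern below are hypothetical examples, not from any particular app:

```ini
# On the deployment server, replicate the app's index-time settings into a
# deployment app (hypothetical name), e.g.:
#   $SPLUNK_HOME/etc/deployment-apps/my_app_indexer_conf/local/props.conf
#   $SPLUNK_HOME/etc/deployment-apps/my_app_indexer_conf/local/transforms.conf

# serverclass.conf: map that app to the indexer tier
[serverClass:indexers]
whitelist.0 = idx*.example.com

[serverClass:indexers:app:my_app_indexer_conf]
restartSplunkd = true
```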
Where should the data/logs go?
2) Your outputs.conf on any search head in the cluster should be set up not to index locally and to forward logs to your index cluster using a [tcpout:] stanza by default, but the app's own outputs.conf may or may not override this, depending on the type of data in the app or whether there is any associated output at all. In that case, the inputs.conf stanzas in the app need to be routed to the appropriate outputs group using the "_TCP_ROUTING = " setting in each stanza. Changing these input stanzas may lead to further issues, discussed in #3.
- Add-on for Active Directory
- almost all of them
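A minimal sketch of that routing (the output group name, indexer hosts, and script path are hypothetical):

```ini
# outputs.conf on the search heads: don't index locally, forward everything
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# inputs.conf inside the app: route this scripted input to the indexer group
[script://./bin/collect_data.py]
_TCP_ROUTING = primary_indexers
interval = 300
```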
Python, that's a snake, right?
3) This is where it gets fun. As seen in issue #1, the location of the conf files is different in a search head cluster, and many devs handle custom APIs and scripted inputs using, what else, their own custom Python scripts. This creates situations where a script relies on a static location for a conf file (issue #1) and/or, more specifically, on a static stanza within the conf file (issue #2). When both the location and the stanza structure change, the script breaks. This needs to be resolved by editing the custom scripts associated with the app.
Did I just write an App?
4) Yes, you did! To compound the issue, I have seen scripts that call the OS Python at startup and then depend on Splunk's own Python libraries, causing all kinds of dependency issues. Or vice versa: the proper imports are not called when using Splunk's bundled Python. Splunk's best practice says to always use Splunk's prepackaged Python to run all scripts and to import the environment variables needed at runtime. Splunk has been pretty good about keeping its Python current and its libraries robust, but not everyone follows this, and these dependencies change between standalone and clustered environments. Bottom line: it takes a lot of debug cycles and Python experience to fix these types of issues.
To follow up: you didn't exactly write an app, but you sure deserve an author's mention, if not co-author credit. I hope this explains to a few people the difficulties in making apps "SHCluster" compliant. The devs' work is never finished, and I feel for them. By the same token, I believe that most users of apps that are associated with other paid-for appliances/applications, or that are paid-for apps themselves, are Enterprise customers, and those apps should therefore be developed for a clustered environment from the get-go.
With that understanding, I believe it would be nice to have a tag on apps so users know what they are getting into before implementing them.