The app is only partially working:
Change overview dashboard:
Apps: detected that I removed a test app but not that I installed it. I tried a second one: detected that I installed it, but not that I removed it.
Users: detected that I created a user
Roles: detected that I created a role
Indexes: did not detect a new index. However, it DID show up in the change investigation dashboard; this is the only inconsistency I found between the two dashboards. You have not used the macro in the Overview dashboard; it's hardwired to _audit. This is easily fixed, and I did so.
Saved searches: detected the deletion of the searches in the first test app (though not their installation, as noted above), but not any of the other modifications or additions to saved searches, of which I've made several.
Macros: detected changes
I had to modify ct_fields_savedsearches to remove action.email.subject and action.email.to because they don't exist in the base search for saved searches and aren't filled in by evals, so any stats command relying on them won't return any results (a sketch of a possible fix is below this list). I also modified ct_report_period to -7d@ latest=now because I kept getting N/A in the top panel, which I didn't like.
Eventtypes: detected new eventtype
I haven't modified any props, transforms or server classes.
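If you'd rather keep those two attributes in ct_fields_savedsearches, a defensive eval in the base search should do it. A sketch only - I'm guessing at the REST endpoint rather than quoting the app's actual base search, and the default value is a placeholder:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | eval 'action.email.subject'=coalesce('action.email.subject', "n/a"),
           'action.email.to'=coalesce('action.email.to', "n/a")
    | table title action.email.subject action.email.to

A fillnull on those two fields would do the same job.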
Can anyone assist with getting this going properly? I'm running it on a stand-alone dev laptop; I've set the indexer and search head counts to 1 and the patterns to the hostname of the machine. Since some of the content is showing up correctly, the thing is obviously working to some extent. I'm just not sure where to go next.
Also, I'm not sure about the wisdom of sticking yet more stuff in _audit, so I made a new index, config_tracker.
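For anyone else doing the same, the index is nothing special - just the usual stanza in indexes.conf, assuming the default $SPLUNK_DB layout:

    [config_tracker]
    homePath   = $SPLUNK_DB/config_tracker/db
    coldPath   = $SPLUNK_DB/config_tracker/colddb
    thawedPath = $SPLUNK_DB/config_tracker/thaweddb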
One thing you might be running into is that the searches used to capture the current state of the config files only run once a day, and they need to have run at least once to establish the baseline that new/updated/deleted items are determined against. There are probably any number of ways to tackle that, but in part it addresses everything being "new" the first time the app is installed. It also means you won't see new items until the next day. You are right that there is a good bit of information already in _audit; however, the reason I chose to use it by default is that you can't run the delete command on data in that index. For some environments, having an immutable record of changes is a bigger deal than for others.
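To illustrate the general shape of that baseline comparison (this is not the app's actual search, and the lookup name is made up): snapshot the current objects, compare them to the last snapshot, and classify the differences.

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | fields title
    | eval src="current"
    | append [| inputlookup baseline_savedsearches.csv | fields title | eval src="baseline"]
    | stats values(src) as src by title
    | eval change=case(mvcount(src)==2, "existing", src=="current", "new", src=="baseline", "deleted")

Writing the current snapshot back out with outputlookup then becomes the baseline for the next run, which is why nothing can be classified until the capture searches have run at least once.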
I took note of a number of other items to address in the next version; thanks!
Found another couple of issues. I've just installed nmon and I'm still getting no props or transforms detected at all. Picking apart ct_fields_props, I found that TZ is not returned in the base search, and hence the summary fails. So I removed it from the macro and the panel is now working. Perhaps an eval is needed to set a default value.
Similarly with ct_fields_transforms: the FIELDS field wasn't set, so I took that one out of the macro. Searching through all the transforms.conf files on my system, I can see FIELDS is not set in any of them--so this is a correct result. It looks like you cannot assume FIELDS is present, and another eval is probably needed - a sketch of what I mean follows.
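By 'an eval' I mean appending a default onto the end of each base search - the default values here are just placeholders. For props:

    | eval TZ=coalesce(TZ, "unset")

and for transforms:

    | eval FIELDS=coalesce(FIELDS, "unset")

(or the equivalent fillnull), so the macros can keep those fields without the summaries coming back empty.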
Out of curiosity what version of Splunk are you running?
I'd already changed the scheduled searches to run every 15 minutes, as I'm just testing it out--sorry, I forgot to point that out. This was in order to get quick results, as there doesn't seem to be any other way to 'bootstrap' the app; the TA's saved searches won't run in the UI. There might be a case for a setup procedure which runs everything once after installation so you can get an inventory and baseline immediately. I think people who are interested in this app would like that a lot--I know I would!
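For what it's worth, all I did was drop an override into a local savedsearches.conf in the TA - the stanza name below is illustrative, not the real search name:

    [Config Tracker - saved searches snapshot]
    enableSched = 1
    cron_schedule = */15 * * * *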
I didn't know that about the _audit index, so of course that's a good idea in some places.
This app is a great idea and sorely needed, I'll be watching out for new developments with great interest. As everyone will have noticed, mature Splunk installations start to become a spaghetti-bowl after years of tinkering and there's not much in the way of support for version control and housekeeping at the moment.
I like the idea of a bootstrap process at initial install. The challenge is making that work in a distributed environment where you have more than one server, and/or an environment where you have Splunk servers that aren't listed as search peers of the search head where this app is installed. When you run a | rest query on a search head, the data that comes back is from that system as well as any search peers it is connected to (typically the indexers), unless you add splunk_server=local. I'd guess that doesn't typically include other search heads - at least it doesn't in our environment - though that is why Splunk has gone that route with the distributed management console. Another scenario is a company having multiple environments but perhaps one management search head with visibility across all of them. At any rate, all of those factors and more went into why I created the TA as a component of this app: it gets shipped out to each component server, with its queries running with the splunk_server=local 'switch'.
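To illustrate with a generic endpoint: run from a search head,

    | rest /services/apps/local

returns rows from that search head plus every search peer it is connected to, whereas

    | rest /services/apps/local splunk_server=local

only returns rows from the instance actually running the search - which is the behaviour the TA's searches rely on.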