I've found some show-stopping deployment server bugs and would like to know if it's safe to downgrade from 6.1 to 6.0.3 on both search heads and indexers in a distributed environment.
My concern is that there are some data model related changes that might break badly with a reversion.
It "should be mostly safe" - but ... you might be happier with a dedicated 6.0.3 (6.0.4?) instance running just deployment server. This separation can make your life easier in more ways than one.
The 6.0.3 deployment server code is also horribly broken.
To be honest, it's been broken in every version of 6.x.
We are still using 5.0.4 for the deployment server because of this. Support said our bugs would be fixed in 6.0.4 (they weren't). 6.1 introduces more deployment issues rather than fewer.
It is a reproducible bug and I can see the root cause: the tenants feature was REMOVED, not deprecated!
My issue is that the major release notes include no bug-fix listing, and the maintenance release notes likewise omit bug fixes and feature removals (the actual cause of this issue).
In our environment we have dedicated instances for almost everything: deployment server, edge deployment servers (deployable deployment servers), search heads, indexers, forwarders, license master, and utility SoS/reporting boxes. Some of them even have dual installs due to software bugs that can't be solved.
Just reverted a test instance.
A couple of notes. Delete the introspection_generator_addon app. You also need to delete the file_tracking_db_threshold_mb setting from system/local/limits.conf.
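The two cleanup steps above can be sketched as shell commands. This is a sketch only: it assumes the standard directory layout under $SPLUNK_HOME (e.g. /opt/splunk), and it demonstrates against a scratch copy of that layout so you can see the effect before touching a real install.

```shell
# Sketch of the post-downgrade cleanup; point SPLUNK_HOME at your
# real install (e.g. /opt/splunk) instead of this scratch copy.
SPLUNK_HOME=$(mktemp -d)
mkdir -p "$SPLUNK_HOME/etc/apps/introspection_generator_addon"
mkdir -p "$SPLUNK_HOME/etc/system/local"
printf '[inputproc]\nfile_tracking_db_threshold_mb = 500\n' \
  > "$SPLUNK_HOME/etc/system/local/limits.conf"

# 1. Remove the app that the 6.1 install added:
rm -rf "$SPLUNK_HOME/etc/apps/introspection_generator_addon"

# 2. Strip the 6.1-only setting from the local limits.conf
#    (GNU sed; on a real host, back the file up first):
sed -i '/^file_tracking_db_threshold_mb/d' \
  "$SPLUNK_HOME/etc/system/local/limits.conf"
```

On a live instance you would follow this with a Splunk restart so the reverted version re-reads its configuration.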
Why does a default install set items under system/local/limits.conf instead of under default?
I have the local version of this file managed with a remote configuration manager (Puppet), which should only contain items I have explicitly set. Some machines loading this file are still running 6.0.3, and some are now on 6.1.3. Is it OK to run 6.1.3 without the file_tracking_db_threshold_mb setting?
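One way to keep a single Puppet-managed file serving both versions is to leave the 6.1-only key out of the managed local file entirely, so each version falls back to its own shipped default. A minimal sketch of the managed fragment, assuming the stanza name from the shipped limits.conf and an illustrative value:

```ini
# system/local/limits.conf as pushed by Puppet -- sketch only.
[inputproc]
# Omit file_tracking_db_threshold_mb here unless you explicitly
# want to override it: 6.0.3 does not know the key, and a 6.1.3
# host that omits it simply uses the default from system/default.
# file_tracking_db_threshold_mb = 500   # example value, not a recommendation
```

The design choice is simply that local/ should hold only deliberate overrides; anything version-specific that you did not set yourself belongs in default/, where the installer owns it.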