Quick rundown of my setup.
Separate indexers and search heads: an indexer cluster and a search head cluster (3 members), plus one extra system that acts as cluster master, deployment server, and deployer, but nothing else.
Everything seems to be working fine replicating local changes across the SH cluster, but when I installed the DB Connect app (pushed from the deployment server), I then had to configure the connections and identities independently on each member. Inputs, lookups, etc. don't seem to replicate either.
I'm assuming that because of things like hashed passwords, normal local .conf files can't just be blindly replicated, but some sort of replication should be configurable. Should I set the 3 search heads up as a resource pool? Or grab my local config and upstream it to the SH cluster deployer? The documentation certainly suggests DB Connect can work with SH clustering, so I'm guessing there's a step I've missed.
I tried the install on a test system, then copied it to SHDeploy, but that pretty much just fails due to the app's active rewriting of /local for passwords etc.
As McLuver says, it just doesn't seem to work on SH Clusters. That doesn't greatly worry me as I was just looking at how to do it without actually having any real purpose. I had a teeny bit of data I wanted as a lookup.
I ended up just scripting a small freetds+perl thing to grab the data and distribute as a CSV file across the search heads.
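For the next person, a minimal sketch of that approach (I've recast my perl in plain shell; the server name, query, credentials, lookup columns, and every hostname/path below are placeholders to adapt):

```shell
#!/bin/sh
# Export a small table to CSV with FreeTDS, then push it to each
# search head's lookup directory. Everything here is an example value.
set -e

OUT=/tmp/mylookup.csv
APP_LOOKUPS='/opt/splunk/etc/apps/search/lookups'

# freebcp (ships with FreeTDS) can run a query and write a
# character-mode, comma-delimited file
freebcp "select host, owner from dbo.assets" queryout "$OUT" \
    -S mydbserver -U dbreader -P "$DB_PASS" -c -t ,

# Splunk CSV lookups need a header row, which bcp-style exports omit
# (GNU sed syntax)
sed -i '1i host,owner' "$OUT"

# copy the file to every member of the search head cluster
for sh in sh1 sh2 sh3; do
    scp "$OUT" "$sh:$APP_LOOKUPS/mylookup.csv"
done
```

Scheduled from cron, this kept the lookup fresh enough for my purposes without DB Connect on the cluster at all.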
I had data coming in as "log"-style indexed data using DB Connect v2 on my forwarders. But again, it was a test case only, as I had no use case. It works fine on non-clustered systems.
And the possible use case I was looking into this for, I'm not allowed to connect to, so it's all moot for me.
But still worth putting up my findings for the next sucker.
It appears that it's not possible to use Splunk DB Connect v2 on a Search Head Cluster at this time. What may end up working better for you is to deploy Splunk DB Connect v2 on a different component of your Splunk architecture. You may then set up some DB inputs that write to an index. Make sure that the machine you put DB Connect on is forwarding to your indexers.
You should then be able to search the DB data just like any other data, with the Search Head Cluster.
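For reference, the forwarding piece is just a standard outputs.conf on whatever box you put DB Connect on (the indexer hostnames below are placeholders):

```
# $SPLUNK_HOME/etc/system/local/outputs.conf on the DB Connect host
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

With that in place, the DB inputs land in your index like any other forwarded data, and the SHC just searches it.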
I think the trick is to install DBConnect on a test system and fully configure it there. Test it and make sure it works.
Then take the fully configured app and put it on the deployer. Next, in the DBConnect app, replace all the hashed passwords with clear text passwords. Finally, have the deployer push it to all search heads in the cluster.
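To illustrate the clear-text step, the edit goes in the app's local/identities.conf in the copy staged on the deployer (stanza and field names are from my test install; verify against your version of DB Connect):

```
# etc/shcluster/apps/splunk_app_db_connect/local/identities.conf (on the deployer)
[my_db_identity]
username = dbreader
# replace the hashed value the test system wrote with the clear-text password
password = MyClearTextPassword
```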
If the deployer does not restart the search heads after installing, do a rolling restart on the search head cluster. This should force each search head to hash any clear-text passwords. (I hope.)
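The push and restart steps are the standard SHC commands, run from the deployer and a cluster member respectively (hostnames and credentials below are placeholders):

```shell
# on the deployer: push the staged bundle; -target is any one member's
# management URI
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

# on the captain (or any member), if the push did not trigger restarts:
splunk rolling-restart shcluster-members
```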
I haven't actually deployed DBConnect in an SHC, but I've done something similar with other apps. Let the community know if this solution does or doesn't work!
I'm having a problem with this as well. After I push the bundle, I see my connections and identities in the GUI. However, when I try to validate them, I get a username/password error on every connection I have set up.
I tried plain-text passwords in my identities.conf file, but they were not encrypted after I pushed the bundle and restarted my search heads. I also tried leaving the encrypted passwords from my test setup, and those gave the same username/password error. If I enter the password in the GUI on each search head individually, I can validate my connections. So it seems to be something with the passwords in identities.conf on each search head.
Gack - I hope this gets fixed in the next release! My next thought is: copy the splunk.secret file to all the instances, and then you will be able to copy across the config files that contain encrypted passwords. The following is a quote from the Splunk wiki, "Things I wish I knew then":
Thinking about search head pooling or clustering? The splunk.secret file is important, because it helps set the encryption key used for things like SSL key files, LDAP service accounts, and so on. For systems that will need to share identical copies of files containing splunk encrypted password data, you may want to copy splunk.secret to such a system before the first time you start Splunk on it.
This should allow you to use the deployer to push the configuration across the search head cluster with the encrypted password.
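A sketch of the splunk.secret copy, assuming a fresh cluster whose members haven't been started yet (hostnames and paths are examples; if a member has already run, any passwords it encrypted under its old secret will have to be re-entered):

```shell
# run from the system whose secret you want to propagate;
# splunk.secret lives under etc/auth
SECRET=/opt/splunk/etc/auth/splunk.secret

for sh in sh1 sh2 sh3; do
    scp "$SECRET" "$sh:$SECRET"
done
# then do the first start (or restart) of Splunk on each member
```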