I'm not sure if this is a v1.1.5 issue or a sh clustering issue in conjunction with the app. I'll also do a standalone 6.2 install and see if I get the same issue.
If I click the "Database connections in Splunk manager" button, all I get is an error that says "There was an error retrieving the configuration, can not process this page."
As this is a fresh install there is no database.conf file for it to read from, but it errors anyway.
So fine, I'll create a new config by clicking New. The new window is completely empty as well.
DB Connect 1.1.5 does not work with Search Head Clustering yet; it will work with Search Head Pooling, or as a standalone system indexing data for a cluster to work with.
I guessed as much.
The weird thing is that I had it working on v6.2 with the old version and SHC! I saw that a new version with 6.2 support was available, so I upgraded, and it broke everything.
I might try a few different combinations of configurations and versions to see if I can get it working again.
Last update: this does actually work with the latest DB Connect and search head clustering. I'm not sure why it broke the first time around, or when doing it from a fresh install (both Splunk and DB Connect), but upgrading through the old versions made it work somehow.
Old post below.
Even though it's not supported, I've gotten an older version to work. I'll now work back through the versions and figure out exactly where it breaks, then update this with how I did it (if I can remember).
Other notes: I am not sure how scheduling via the captain behaves for DB inputs.
edit: OK, this works with 1.1.4 as well. I just extracted the 1.1.4 install over the top of the existing shcluster/apps/dbx directory on the deployer.
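For reference, the overlay is nothing fancier than untarring the app over the staged copy on the deployer and pushing the bundle again. A rough sketch, where the paths, tarball name, and target member URI are all assumptions for a default install:

```shell
#!/bin/sh
# Sketch only: overlay DB Connect onto the deployer's staged app,
# then push the bundle. All paths and names here are assumptions.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
STAGED="$SPLUNK_HOME/etc/shcluster/apps"
TARBALL="/tmp/splunk-dbx-1.1.4.tgz"      # hypothetical download location

if [ -d "$STAGED" ] && [ -f "$TARBALL" ]; then
    tar -xzf "$TARBALL" -C "$STAGED"     # extracts dbx/ over the old copy
    # Push to the cluster (hostname is a placeholder):
    "$SPLUNK_HOME/bin/splunk" apply shcluster-bundle \
        -target "https://sh1.example.com:8089"
fi
```

The guard just keeps the script from doing anything on a box that isn't the deployer.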
What you need to do is click the "show all settings" button.
The deployer will combine the app's local and default database.conf into a single default/database.conf file when it pushes the bundle. You will need to edit and fix this on each instance, or fix it before running the apply shcluster-bundle command.
On each cluster member, make sure you copy your entire dbx/default/database.conf to dbx/local/database.conf too. These settings don't stack like normal stanzas, so a small "disabled = 0" in local won't actually make it work! (I missed this nugget of info in the docs the first time around.)
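To illustrate, this is the kind of stanza that needs to exist in full in dbx/local/database.conf on each member. The stanza name and values are made up, and the key names should be checked against your own default/database.conf rather than taken from here:

```ini
# Hypothetical example -- copy your real, complete stanza from
# default/database.conf rather than writing a partial one.
[my_mssql]
type = mssql
host = db.example.com
port = 1433
database = sales
username = splunk_ro
disabled = 0
# A local/ stanza containing only "disabled = 0" will NOT merge with
# the rest of the settings in default/ -- the whole stanza is needed.
```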
edit 2: this also works with the latest version, 1.1.5, using the same method above.
Just need to test the db inputs to see who runs the job now 😄
I am also facing exactly the same issue: I can't get it to work in SHC, while a standalone dev SH works fine.
SPLUNK version: Splunk 6.3.0 (build aa7d4b1ccb80)
DB Connect version: 1.2.2
I tried removing the app from the deployer and pushing the config to uninstall it. Then I used a deployment server to push the app to all SHs in the SHC to approximate a standalone installation. Even then, same issue.
This is a major roadblock: the dashboards I'm building rely heavily on DB queries, and I am completely stuck.
Thanks in advance.
I was able to get this working as well with:
Splunk Search Head Cluster 6.2.1
DB Connect 1.1.7
The only error I had was connecting to an MSSQL database:
2015-03-17 19:36:20.210 main:ERROR:Database - Error in password decryption: Given final block not properly padded
I was able to:
1. Apply the Cluster Bundle to my cluster members.
2. On one of my Search Head members I had to navigate to /opt/splunk/etc/apps/dbx/default/database.conf
3. Copy database.conf to /opt/splunk/etc/apps/dbx/local/database.conf
4. Type the password in plain text for the MSSQL Server in /opt/splunk/etc/apps/dbx/local/database.conf
5. Restart my search head.
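The file moves in steps 2-5 boil down to a couple of commands. A sketch assuming a default /opt/splunk install; the password edit is left as a manual step:

```shell
#!/bin/sh
# Sketch of the workaround above; paths assume a default install.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
DBX="$SPLUNK_HOME/etc/apps/dbx"

if [ -d "$DBX" ]; then
    mkdir -p "$DBX/local"
    cp "$DBX/default/database.conf" "$DBX/local/database.conf"
    # Now edit local/database.conf by hand, replacing the hashed
    # password with the plain-text one, then:
    "$SPLUNK_HOME/bin/splunk" restart
fi
```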
At that point I was querying with no problem.
I will note that configuring the databases in the GUI was still broken, but this is definitely a workaround until they update the app.
You can also bypass that issue by installing the same splunk.secret on each box and then pushing database.conf via the deployer to the cluster. That way the hashed password is common to every box and will decrypt correctly.
For large numbers of cluster members this is the only practical way to do it. It's still a pain, though, as it requires admin intervention to make it work.
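One sanity check before pushing is to confirm every member really does share the same splunk.secret, for example by comparing copies pulled from each member against the deployer's file. A sketch where the /tmp/secrets copies and their fetch step (e.g. scp) are assumptions:

```shell
#!/bin/sh
# Compare member copies of splunk.secret against the deployer's file.
# /tmp/secrets/*.secret are hypothetical copies you fetched yourself.
SECRET="${SECRET:-/opt/splunk/etc/auth/splunk.secret}"
for copy in /tmp/secrets/*.secret; do
    [ -f "$copy" ] || continue               # no copies fetched yet
    if cmp -s "$SECRET" "$copy"; then
        echo "$copy matches"
    else
        echo "$copy DIFFERS: hashed passwords will not decrypt there"
    fi
done
```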