
What config changes are needed to ensure DB inputs don't run on non-captain search head nodes?

jincy_18
Path Finder

Hi All,
We are using Splunk DB Connect 2.3 in a search head clustered environment.
When setting up a new database input, we configure the input in the DBX2 app on the deployer (which is not part of the search head cluster), copy the splunk_app_db_connect app from /apps/splunk/etc/apps/ to /apps/splunk/etc/shcluster/apps/, and then run ./splunk apply shcluster-bundle -target https://searchheadclustermember:8089 to push it to the search heads.
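Concretely, the steps on the deployer are:

# stage the configured DB Connect app for the SHC push
cp -R /apps/splunk/etc/apps/splunk_app_db_connect /apps/splunk/etc/shcluster/apps/

# push the configuration bundle to the search heads
cd /apps/splunk/bin && ./splunk apply shcluster-bundle -target https://searchheadclustermember:8089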

As per our understanding, DB inputs run on the search head captain. However, in the logs we see "action=modular_input_not_running_on_captain", i.e. DB inputs aren't always running on the captain.
Does Splunk first query each search head peer to find which one is the captain and then run the input query on the captain node? If not, what config changes are needed to ensure inputs don't run on non-captain nodes?
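For context, this is how we check which member currently holds captaincy and which hosts emit the message (assuming DB Connect's logs are indexed into _internal, the default for files under $SPLUNK_HOME/var/log/splunk):

# on any cluster member: show the current captain
./splunk show shcluster-status

# ad-hoc search: which members are logging the message?
index=_internal "action=modular_input_not_running_on_captain" | stats count by host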

Thanks,
Jincy

1 Solution

adonio
Ultra Champion

Hello jincy_18,
please refer to @martin_mueller's comment below.
Use a Heavy Forwarder when using DBX, especially in an SHC configuration.
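A rough outline of the move, assuming a standalone heavy forwarder with the same $SPLUNK_HOME layout (hostnames and paths are placeholders):

# copy the configured DB Connect app from the deployer to the heavy forwarder
scp -r deployer:/apps/splunk/etc/apps/splunk_app_db_connect /apps/splunk/etc/apps/

# restart the forwarder so the DB inputs start running there
/apps/splunk/bin/splunk restart

# then remove or disable the inputs in the copy that stays under etc/shcluster/apps,
# so only the search-time pieces (lookups, reports) remain on the search heads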


martin_mueller
SplunkTrust

As per http://docs.splunk.com/Documentation/DBX/2.3.1/DeployDBX/Architectureandperformanceconsiderations#Se... you shouldn't run inputs on an SHC:

Splunk recommends running reports (saved searches), alerts, and lookups on the search head or search head cluster captain, and running inputs and outputs from a forwarder. This is because disruptive search head restarts and reloads are more common, and scheduled or streaming bulk data movements can impact the performance of user searches. Poor user experience, reduced performance, increased configuration replication, unwanted data duplication, or even data loss can result from running inputs and outputs on search head infrastructure. Running inputs and outputs on a search head captain does not provide extra fault tolerance or enhance availability, and is not recommended.
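To verify which DB Connect input stanzas are enabled on a given member and which file they come from, a btool check along these lines can help (the mi_input pattern is an assumption about how DBX 2.x names its input stanzas; adjust it if yours differ):

# on a search head member: list effective inputs.conf settings for the app and their source files
/apps/splunk/bin/splunk btool inputs list --debug --app=splunk_app_db_connect | grep -Ei "mi_input|disabled"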


jincy_18
Path Finder

Thank you, Martin, for your valuable input.

We do have a plan to move to an HWF along with DBC3, but that will take some time.
Since we are currently using DBC2 in an SHC, could you provide some pointers to debug the issue mentioned above?

Also, in the SHC we do not need to maintain the state of DB connections and reads ourselves, because even if the captain goes down, the other search heads have the latest state and can continue processing. How do we address failover in an HWF configuration?
