Deployment Architecture

DB inputs only running on Search Head Cluster Captain

aaronbarry73
Path Finder

Hi all,

I built a dedicated Search Head Cluster with 3 members and a deployer to load and test how DB Connect behaves in a shcluster.  We are on Splunk Enterprise 9.1.2 with DB Connect 3.15.1.  The configs replicate fine across the members, and I am running several inputs.  So far, all of the inputs appear to run on the captain only.  Is this normal behavior, and will the captain start distributing input jobs to the other members once it is maxed out?
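
For anyone reproducing this: I staged the app on the deployer and pushed it with the standard bundle command (the hostname and credentials below are placeholders, and I am assuming the default splunk_app_db_connect directory name):

# On the deployer: stage DB Connect, then push the bundle to any cluster member.
cp -R splunk_app_db_connect $SPLUNK_HOME/etc/shcluster/apps/
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://abclx1001:8089 -auth admin:changeme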

I am running this search to see the input jobs:

index=_internal sourcetype=dbx_job_metrics connection=* host IN (abclx1001,abclx1002,abclx1003)
| table _time host connection input_name db_read_time status start_time end_time duration read_count write_count error_count
| sort - _time

All inputs run successfully, but the host field is always the same: it is the captain.

The other members give me messages like this:

2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.045278310775756836 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 41
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.04212641716003418 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 38
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request

Thoughts?  Is the SHC supposed to distribute these inputs the way it distributes scheduled searches?

1 Solution

aaronbarry73
Path Finder

I opened a case with Splunk, and after review they replied that all of the DB inputs running on the SHC captain is expected behavior.  Here's an excerpt from their findings:

"In review and consultation with other colleagues I believe I may have found an answer. It is located in the documentation: 

 
Specifically, the 

Deploy DB Connect on search head clusters

section.
  • DB Connect provides high availability on Splunk Enterprise with a Search Head Cluster, by executing input/output on the captain.
Essentially, this is saying that in Splunk this is normal "expected behavior" and can be treated as such."
 
To me this means that DB inputs and DB outputs on a Search Head Cluster will be limited by the hardware (CPU/memory) of the captain, so you have to be careful with this.

The benefit of DB Connect on an SHC is the replication of identity and connection configs across the cluster members.  Rather than using DB Connect to configure and run the input/output jobs, I recommend creating scheduled searches that run the dbxquery command.  That way, the captain distributes the query jobs across all of the members of the cluster, like any other scheduled search.  I am testing this on my SHC with positive results!  A rough sketch follows.
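
As a rough sketch only (the connection name, SQL, destination index, and stanza name are made-up placeholders, not my real config, and the time-window syntax will vary by database), the saved search looks something like this in savedsearches.conf pushed from the deployer:

# savedsearches.conf -- placeholders throughout
[db_input_app_events]
# dbxquery pulls the rows from the database; collect writes them to a summary index
search = | dbxquery connection="my_connection" query="SELECT id, event_time, message FROM app_events WHERE event_time > CURRENT_TIMESTAMP - INTERVAL '5' MINUTE" | collect index=db_events
cron_schedule = */5 * * * *
enableSched = 1

One caveat: unlike a DB Connect input, dbxquery keeps no rising-column checkpoint, so the WHERE clause has to bound the rows itself or you will ingest duplicates.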



gcusello
SplunkTrust

Hi @aaronbarry73 ,

Good for you; see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉


gcusello
SplunkTrust

Hi @aaronbarry73,

In general, I don't like running inputs on a search head; I prefer a dedicated Heavy Forwarder.

Anyway, open a case with Splunk Support to better understand this behavior.

Ciao.

Giuseppe

aaronbarry73
Path Finder

Thanks, @gcusello.  It looks like this might be the intended behavior, but it seems odd that the captain would run all DBX jobs (unless the jobs are being delegated and the captain is doing all of the logging?), so I will open a case to get confirmation of the expected behavior.

We have always used dedicated Heavy Forwarders, but I figured it would be nice to maintain all identities, connections, and inputs in one place!  We'll see what Splunk says, and I'll keep digging!


gcusello
SplunkTrust

Hi @aaronbarry73 ,

Let me know if I can help you more, or please accept one answer for the other people of the Community.

When you have the answer from Splunk Support, please share it for the other people of the Community.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉
