Hi all,
I built a dedicated Search Head Cluster with three members and a deployer to load-test how DB Connect behaves in an SHC, running Splunk Enterprise 9.1.2 and DB Connect 3.15.1. The configs replicate fine across the members, and I am running several inputs. So far, every input appears to run on the captain only. Is this normal behavior, and will the captain start distributing input jobs to the other members once it is maxed out?
I am running this search to see the input jobs:
index=_internal sourcetype=dbx_job_metrics connection=* host IN (abclx1001,abclx1002,abclx1003)
| table _time host connection input_name db_read_time status start_time end_time duration read_count write_count error_count
| sort - _time
All inputs are successful, and the host field is always the same - it is the captain.
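For a quicker per-member summary, a stats variant of the same search (just a sketch over the same dbx_job_metrics events, same host list) shows how many runs each member reports and when it last ran each input; in my environment every row comes back with the captain as the host:
index=_internal sourcetype=dbx_job_metrics connection=* host IN (abclx1001,abclx1002,abclx1003)
| stats count AS runs latest(_time) AS last_run BY host input_name
| convert ctime(last_run)
| sort host input_name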
The other members give me messages like this:
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.045278310775756836 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 41
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 74 : Run DB Input name=test_db_input took 0.04212641716003418 s
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 66 : Input was run on other node status=303 content=b'Ignoring input request as other node is the captain'
127.0.0.1 - - [11/Dec/2023:23:40:00 +0000] "POST /api/inputs/test_db_input/run HTTP/1.1" 303 51 "-" "python-requests/2.25.0" 38
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 51 : Run DB Input name=test_db_input
2023-12-11T17:40:00-0600 [INFO] [dbx_db_input.py], line 45 : action=send_run_input_request
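To confirm the non-captain members are deferring every scheduled run rather than failing, I also count those "Ignoring input request" messages per member. This is a rough sketch that assumes the dbx_db_input.py log lines shown above are indexed into _internal on each member:
index=_internal "Ignoring input request as other node is the captain" host IN (abclx1001,abclx1002,abclx1003)
| stats count BY host
| sort - count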
Thoughts? Is the SHC supposed to distribute these inputs the way it distributes scheduled searches?
I opened a case with Splunk and they reviewed and replied that all of the DB Inputs running on the SHC Captain is expected behavior. Here's an excerpt from their findings:
"In review and consultation with other colleagues I believe I may have found an answer. It is located in the documentation:
Hi @aaronbarry73 ,
good for you, and see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉
Hi @aaronbarry73,
In general, I don't like to run inputs on a Search Head; I prefer to use a dedicated Heavy Forwarder.
Anyway, open a case with Splunk Support to better understand this behavior.
Ciao.
Giuseppe
Thanks, @gcusello. It looks like this might be the intended behavior, but it seems odd that the captain would run all DB Connect jobs (unless the jobs are being delegated and the captain is just doing all of the logging?), so I think I will open a case to get confirmation of the expected behavior.
We have always used dedicated Heavy Forwarders, but I figured it would be nice to maintain all identities, connections and inputs in one place! We'll see what Splunk says and I'll keep digging!
Hi @aaronbarry73 ,
let me know if I can help you further, or please accept one answer for the benefit of the other people in the Community.
When you have the answer from Splunk Support, please share it for the other people in the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉