Hello,
I am facing an issue on my 3-member SHC, where I have used the deployer to push an app's local folder with deployer_push_mode = full. This ends up at <app>/local/inputs.conf on all 3 SHs.
This inputs.conf contains settings that should be specific to the SH it is on, e.g. the SSL cert name.
So for now, the inputs.conf is identical throughout the cluster, which is wrong because the certs are named differently on each SH.
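For reference, the pushed file is along these lines (assuming a splunktcp-ssl stanza; the port number and cert path below are placeholders, not my exact values):

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/sh1_cert.pem
requireClientCert = false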
The thing is, I only need this inputs.conf on 1 SH, the one receiving logs from Splunk UBA.
Splunk UBA is currently forwarding logs to SH1.
My question is: should SH1 go down, how do I configure SH2's inputs.conf to point to its own cert path instead of the deployer-pushed configuration, which has SH1's path?
I'm not sure if I can set up each of the 3 SHs' /opt/splunk/etc/<app>/local/inputs.conf differently, or whether that would cause SHC raft issues.
First, the app configuration has no bearing on SHC raft. The cluster can choose a captain even if the members are not identical.
SHC members should not be running inputs. Put the inputs on a heavy forwarder instead.
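As a rough sketch of that setup (the port, cert path, and indexer hostnames here are just examples, assuming a splunktcp-ssl input), the heavy forwarder's inputs.conf would hold the listener:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/hf_cert.pem

and its outputs.conf would relay the UBA data on to the indexer tier:

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

That keeps the cert-specific settings off the SHC entirely, so the deployer bundle can stay identical on every member.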
But won't directly editing the inputs.conf on a single SHC member cause the cluster to be out of sync?
If all 3 SHs in my cluster have a different <app>/local/inputs.conf, won't that cause issues when a captain is re-elected or when the deployer pushes a new bundle to be replicated across the cluster?
What I'm trying to achieve is sending the Splunk UBA anomalies and threats to Splunk ES.
I have a 3-member SHC with ES on it, and UBA does not support integrating with multiple search heads.
Hi
as @richgalloway said, don't use the SHs for getting data in with inputs.conf. Use an HF or the indexers instead.
If you really want to invite future issues, you could put the per-SH settings in etc/system/local (the deployer does not manage that directory, and it has the highest configuration precedence), but remember this is neither Splunk's nor our suggestion. Use HFs to manage inputs in distributed environments!
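If you go down that road anyway, each member would carry its own override in $SPLUNK_HOME/etc/system/local/inputs.conf, something like this on SH2 (the cert path is only an example):

[SSL]
serverCert = /opt/splunk/etc/auth/sh2_cert.pem

Because system/local wins over the app's local directory, the deployer bundle stays the same everywhere while each SH points at its own cert.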
r. Ismo