Hi Folks,
I have a 3-member SHC with Splunk ES. Currently, when ES triggers a notable, the notable is triggered 3 times, even though throttling is correctly configured.
In my opinion the SHC is out of sync. Do you have any suggestions?
Regards
I have solved this issue.
To get the notables across the SHC, you need to send the notable data to an index on the indexer cluster using outputs.conf.
Once the data is sent there, new notables will be available on all SHs.
In a well-deployed environment you should _not_ index anything locally on any tier except the indexing tier (and, in recent versions, the DSes). You should forward all your events to the indexer tier.
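Something like the following outputs.conf on each SHC member covers both points (the output group name and the indexer host:port values are placeholders for your environment, so treat this as a sketch rather than a drop-in config):

# outputs.conf on each SHC member (group name and server list are examples)
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

With this in place the search heads keep nothing locally; the notable events land on the indexers and are searchable from every member.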
So now the issue is that some alarms trigger on one SH and others trigger on the second SH.
shcluster status is up.
If notables should trigger only once and correlation searches only run on one search head, what is the point of having an SH cluster?
Also, what will happen to reports that use notable data?
How will I control searches so that they run on only one SH?
In an SH cluster the scheduler distributes scheduled searches among the cluster members, so if you have 3 SHs with 32 CPUs each, you effectively have 96 CPUs to distribute searches across.
But a single search runs on a single SH, and its results are replicated to the other members.
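If you want to verify that on your cluster, one way (a hedged example; it assumes your SHs forward their _internal logs to the indexers, and you would substitute your own correlation search name) is to look at the scheduler events and see which member ran each invocation:

index=_internal sourcetype=scheduler savedsearch_name="Your Correlation Search Name"
| stats count by host, status

Each scheduled run should show up under exactly one host; the same search appearing under several hosts at the same time points at members scheduling independently.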
Also, splunk show shcluster-status shows much more information than just "up".
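For example, run this on every member (the output fields vary a little by version, so this is a pointer rather than a transcript):

$SPLUNK_HOME/bin/splunk show shcluster-status

In the output, check that all members report the same elected captain in the Captain section and that every entry in the Members section shows status Up.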
Did you find a solution to this?
My problem is that it triggers on all SHC members, and when I assign a notable from one SH, it is not reflected on the other SHs.
Yes. It does look as if the SHC members weren't properly communicating with one another. What is interesting, though, is that the captain is responsible for scheduling searches, so if you had connectivity problems you should also have problems with captain election. But your behaviour suggests that each cluster node works independently.
What does "splunk show shcluster-status" say on each node?
Hi @Nawab ,
the correct behavior is that the Correlation Search runs on only one of the SHs and only one Notable is created.
If more than one Notable is created, it means that the cluster is out of sync, as @aasabatini said.
In this case, you have to check the sync, restart the members, and, if necessary, rebuild the configuration.
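As a rough sketch (not a complete recovery procedure; see the docs link below for the full steps), the commands typically involved are:

# on an out-of-sync member: pull the replicated configuration back from the captain
$SPLUNK_HOME/bin/splunk resync shcluster-replicated-config

# on the captain: restart all members one at a time
$SPLUNK_HOME/bin/splunk rolling-restart shcluster-members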
For more info, see https://docs.splunk.com/Documentation/Splunk/9.3.1/DistSearch/SHCdeploymentoverview
Ciao.
Giuseppe