Monitoring Splunk

SHC performance issue

chaitali_1994
Engager

Hi,

We have 3 search heads in our search head clustered environment. First we saw Raft issues on the captain, so we followed this document: https://docs.splunk.com/Documentation/Splunk/7.2.3/DistSearch/Handleraftissues. Even after that, we are facing the same issue: the captain goes down first, and then the members follow one by one.
We have also found one more error: "child killed by signal 9".
There are also some false error messages appearing in the DMC saying the SHC is down and to bring it back online ASAP to avoid service disruption.
Can anyone please help with resolving these issues? The search heads are going down abruptly.

Thanks in Advance!


codebuilder
SplunkTrust

You need to increase memory on your SHC nodes, increase ulimit settings, or reduce memory consumption by Splunk.

When a process is killed by signal 9, it indicates that the kernel is protecting itself by killing processes that are consuming or reserving more memory than is available or allowed (the OOM killer).
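One quick way to confirm the OOM killer is behind the signal 9 is to check the kernel log on the affected SH. A minimal sketch (the sample log line below is illustrative, not from your environment; on a live host you would run `dmesg -T | grep -i 'killed process'`):

```shell
#!/bin/sh
# On a real host: dmesg -T | grep -i 'killed process'
# The line below is a made-up example of what the OOM killer logs.
sample="Out of memory: Killed process 12345 (splunkd) total-vm:9000000kB, anon-rss:7800000kB"

# Extract the PID and process name of the OOM victim.
pid=$(echo "$sample" | sed -n 's/.*Killed process \([0-9]*\).*/\1/p')
proc=$(echo "$sample" | sed -n 's/.*(\([^)]*\)).*/\1/p')

echo "OOM victim: $proc (pid $pid)"
# -> OOM victim: splunkd (pid 12345)
```

If the victim is splunkd (or a search process it spawned), that confirms the memory angle: add RAM, reduce concurrent/scheduled search load, or tune limits.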

----
An upvote would be appreciated and Accept Solution if it helps!

DavidHourani
Super Champion

Seems like a crash: "child killed by signal 9". You'll need to reach out to Support, as this could be due to a bug. Are you on 7.2.3?


jnudell_2
Builder

SHC is a pretty complicated setup, and it's very finicky. If you're not familiar with operating and maintaining a SHC, it might be best to get Splunk Professional Services involved, or open a support ticket for your issue.

skalliger
SplunkTrust

That's not easy to troubleshoot without more information about your environment.

  • What are the specs of the SHs?
  • Are you running a lot of scheduled searches? What's happening before they go down? Take a look at the Monitoring Console (I suggest using it; it gives very good insight, especially for troubleshooting) and analyze the SHC dashboards provided there.
  • What was the reason the processes got killed? Memory problems, maybe?
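On the resource side, it's also worth verifying the OS limits on each SH member. A minimal sketch comparing current ulimits against the minimums Splunk's installation docs commonly recommend (the 64000 / 16000 thresholds are assumptions here; verify them against the Installation Manual for your version):

```shell
#!/bin/sh
# Compare current ulimits to commonly recommended Splunk minimums.
# Thresholds are assumptions -- check your version's Installation Manual.
rec_nofile=64000   # open file descriptors
rec_nproc=16000    # max user processes

cur_nofile=$(ulimit -n)
cur_nproc=$(ulimit -u)

for pair in "open files:$cur_nofile:$rec_nofile" "user processes:$cur_nproc:$rec_nproc"; do
    name=${pair%%:*}; rest=${pair#*:}
    cur=${rest%%:*};  rec=${rest#*:}
    if [ "$cur" != "unlimited" ] && [ "$cur" -lt "$rec" ]; then
        echo "WARN: $name = $cur (below recommended $rec)"
    else
        echo "OK:   $name = $cur"
    fi
done
```

Run this as the user that launches splunkd, since limits are per-user; values raised in /etc/security/limits.conf only take effect for new sessions.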

Skalli
