<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: SHC performance issue in Monitoring Splunk</title>
    <link>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449401#M3781</link>
    <description>&lt;P&gt;An SHC is a pretty complicated setup, and it's very finicky. If you're not familiar with operating and maintaining an SHC, it might be best to get Splunk Professional Services involved, or open a support ticket for your issue.&lt;/P&gt;</description>
    <pubDate>Wed, 26 Jun 2019 19:50:02 GMT</pubDate>
    <dc:creator>jnudell_2</dc:creator>
    <dc:date>2019-06-26T19:50:02Z</dc:date>
    <item>
      <title>SHC performance issue</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449399#M3779</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;We have 3 search heads in our search head cluster environment. &lt;BR /&gt;First we saw Raft issues on the captain, so we followed this document: &lt;A href="https://docs.splunk.com/Documentation/Splunk/7.2.3/DistSearch/Handleraftissues" target="_blank"&gt;https://docs.splunk.com/Documentation/Splunk/7.2.3/DistSearch/Handleraftissues&lt;/A&gt;. Even after that, we are still facing the same issue: the captain goes down first, and then the members follow one by one. &lt;BR /&gt;We have also found one more error: "child killed by signal 9". &lt;BR /&gt;There are also some false error messages appearing in the DMC saying the SHC is down and should be brought back online ASAP to avoid service disruption.&lt;BR /&gt;Can anyone please help us resolve these issues, as the search heads keep going down abruptly?&lt;/P&gt;
&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Sat, 06 Jun 2020 01:45:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449399#M3779</guid>
      <dc:creator>chaitali_1994</dc:creator>
      <dc:date>2020-06-06T01:45:56Z</dc:date>
    </item>
    <item>
      <title>Re: SHC performance issue</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449400#M3780</link>
      <description>&lt;P&gt;That is not easy to troubleshoot without more information about your environment.&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;What are the specs of the SHs?&lt;/LI&gt;
&lt;LI&gt;Are you running a lot of scheduled searches? What's happening before the members go down? Please take a look at the Monitoring Console (I suggest using it; it gives very good insight, especially for troubleshooting) and analyze the SHC dashboards provided there. You can also quantify the scheduler load with a search like the one sketched after this list.&lt;/LI&gt;
&lt;LI&gt;What was the reason the processes got killed? Memory problems, maybe?&lt;/LI&gt;
&lt;/UL&gt;
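&lt;P&gt;A minimal sketch for checking scheduler load from the CLI (this assumes the default &lt;CODE&gt;scheduler&lt;/CODE&gt; sourcetype in the &lt;CODE&gt;_internal&lt;/CODE&gt; index; adjust the time range and credentials to your environment):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Count scheduled-search outcomes over the last 24 hours.
# Climbing "skipped" or "deferred" counts suggest the SHC is overloaded.
$SPLUNK_HOME/bin/splunk search \
  'index=_internal sourcetype=scheduler earliest=-24h | stats count by status' \
  -auth admin:changeme&lt;/CODE&gt;&lt;/PRE&gt;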

&lt;P&gt;Skalli&lt;/P&gt;</description>
      <pubDate>Wed, 26 Jun 2019 19:33:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449400#M3780</guid>
      <dc:creator>skalliger</dc:creator>
      <dc:date>2019-06-26T19:33:33Z</dc:date>
    </item>
    <item>
      <title>Re: SHC performance issue</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449401#M3781</link>
      <description>&lt;P&gt;An SHC is a pretty complicated setup, and it's very finicky. If you're not familiar with operating and maintaining an SHC, it might be best to get Splunk Professional Services involved, or open a support ticket for your issue.&lt;/P&gt;</description>
      <pubDate>Wed, 26 Jun 2019 19:50:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449401#M3781</guid>
      <dc:creator>jnudell_2</dc:creator>
      <dc:date>2019-06-26T19:50:02Z</dc:date>
    </item>
    <item>
      <title>Re: SHC performance issue</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449402#M3782</link>
      <description>&lt;P&gt;That looks like a crash: &lt;CODE&gt;child killed by signal 9&lt;/CODE&gt;. You'll need to reach out to Support, as this could be due to a bug. Are you on 7.2.3? The crash logs should confirm it; see the sketch below.&lt;/P&gt;
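&lt;P&gt;A minimal sketch for confirming the version and pulling recent crash evidence (paths assume a default &lt;CODE&gt;$SPLUNK_HOME&lt;/CODE&gt; layout):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Confirm the exact Splunk version and build
$SPLUNK_HOME/bin/splunk version

# splunkd usually writes a crash-*.log when a child process crashes;
# note that a kill -9 from outside (e.g. the OOM killer) may leave none.
ls -lt $SPLUNK_HOME/var/log/splunk/crash-*.log | head -5

# Find the "killed by signal 9" events the parent splunkd logged
grep -i "signal 9" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -20&lt;/CODE&gt;&lt;/PRE&gt;</description>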
      <pubDate>Thu, 27 Jun 2019 09:41:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449402#M3782</guid>
      <dc:creator>DavidHourani</dc:creator>
      <dc:date>2019-06-27T09:41:20Z</dc:date>
    </item>
    <item>
      <title>Re: SHC performance issue</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449403#M3783</link>
      <description>&lt;P&gt;You need to increase memory on your SHC nodes, raise your ulimit settings, or reduce Splunk's memory consumption.&lt;/P&gt;

&lt;P&gt;When a process is killed by signal 9, it usually means the kernel is protecting itself by killing processes that are consuming or reserving more memory than is available or allowed. You can verify this from the kernel log and your current limits, as sketched below.&lt;/P&gt;
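&lt;P&gt;A minimal sketch for checking both, run on a cluster member as the user that owns splunkd (the limits shown are the values commonly recommended in the Splunk docs; treat them as a starting point, not a mandate):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# Did the kernel OOM killer fire? Look for its footprint in the kernel log.
dmesg | grep -iE "out of memory|oom|killed process" | tail -20

# What limits does the splunk user actually run with?
ulimit -a

# Raise limits persistently for the splunk user in /etc/security/limits.conf,
# e.g. (per the commonly cited Splunk recommendations):
#   splunk soft nofile 64000
#   splunk hard nofile 64000
#   splunk soft nproc  16000
#   splunk hard nproc  16000&lt;/CODE&gt;&lt;/PRE&gt;</description>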
      <pubDate>Thu, 27 Jun 2019 16:14:55 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/SHC-performance-issue/m-p/449403#M3783</guid>
      <dc:creator>codebuilder</dc:creator>
      <dc:date>2019-06-27T16:14:55Z</dc:date>
    </item>
  </channel>
</rss>

