<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Why is splunkd mothership daemon on a standalone search head being killed by OOM killer? in Monitoring Splunk</title>
    <link>https://community.splunk.com/t5/Monitoring-Splunk/Why-is-splunkd-mothership-daemon-on-a-standalone-search-head/m-p/389937#M3345</link>
    <description>&lt;P&gt;It turns out to have been caused by the debug option debug_metrics=true - the symptoms disappeared after debug_metrics was reverted to false.&lt;BR /&gt;
Setting 'debug_metrics=true' greatly expands the amount of perf data collected in info.csv by the search process, because it breaks the perf data down per indexer - and we have hundreds of indexers across multiple clusters. The usual aggregation does not occur, the data for each indexer is preserved, the perf data grows a hundredfold, and the search head loads all of it for analysis.&lt;BR /&gt;
Until a fix is available, please monitor the memory usage of splunkd whenever you use 'debug_metrics = true' in limits.conf.&lt;/P&gt;</description>
    <pubDate>Tue, 29 Sep 2020 23:56:41 GMT</pubDate>
    <dc:creator>sylim_splunk</dc:creator>
    <dc:date>2020-09-29T23:56:41Z</dc:date>
    <item>
      <title>Why is splunkd mothership daemon on a standalone search head being killed by OOM killer?</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Why-is-splunkd-mothership-daemon-on-a-standalone-search-head/m-p/389936#M3344</link>
      <description>&lt;P&gt;Our Enterprise Security search head was stopped by the OOM killer twice today. The attached graph shows memory spikes, and the OOM killer kills splunkd with kernel messages like these:&lt;/P&gt;</description>

&lt;PRE&gt;&lt;CODE&gt;Mar 28 00:29:38 splunk-es kernel: splunkd invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Mar 28 00:29:44 splunk-es kernel: [&amp;lt;ffffffff853ba4e4&amp;gt;] oom_kill_process+0x254/0x3d0
Mar 28 00:29:44 splunk-es kernel: [&amp;lt;ffffffff853b9f8d&amp;gt;] ? oom_unkillable_task+0xcd/0x120
Mar 28 00:29:45 splunk-es kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
Mar 28 00:29:45 splunk-es kernel: splunkd invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;This started today, in the middle of troubleshooting some search issues.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 20:03:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Why-is-splunkd-mothership-daemon-on-a-standalone-search-head/m-p/389936#M3344</guid>
      <dc:creator>sylim_splunk</dc:creator>
      <dc:date>2019-04-03T20:03:03Z</dc:date>
    </item>
    <item>
      <title>Re: Why is splunkd mothership daemon on a standalone search head being killed by OOM killer?</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Why-is-splunkd-mothership-daemon-on-a-standalone-search-head/m-p/389937#M3345</link>
      <description>&lt;P&gt;It turns out to have been caused by the debug option debug_metrics=true - the symptoms disappeared after debug_metrics was reverted to false.&lt;BR /&gt;
Setting 'debug_metrics=true' greatly expands the amount of perf data collected in info.csv by the search process, because it breaks the perf data down per indexer - and we have hundreds of indexers across multiple clusters. The usual aggregation does not occur, the data for each indexer is preserved, the perf data grows a hundredfold, and the search head loads all of it for analysis.&lt;BR /&gt;
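As a hedged sketch of the revert (the [search] stanza and the file location are assumptions based on standard Splunk configuration layout, not stated in this thread):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# debug_metrics = true preserves per-indexer perf data in info.csv;
# reverting to false restores aggregation and shrinks memory use
debug_metrics = false
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Changes to limits.conf typically require a splunkd restart to take effect.&lt;BR /&gt;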
Until a fix is available, please monitor the memory usage of splunkd whenever you use 'debug_metrics = true' in limits.conf.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 23:56:41 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Why-is-splunkd-mothership-daemon-on-a-standalone-search-head/m-p/389937#M3345</guid>
      <dc:creator>sylim_splunk</dc:creator>
      <dc:date>2020-09-29T23:56:41Z</dc:date>
    </item>
  </channel>
</rss>

