<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: splunkd died every day with the same error in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93451#M24084</link>
    <description>&lt;P&gt;Your ulimits are either not set correctly or are still at the system defaults.&lt;BR /&gt;
As a result, splunkd is likely using more memory than allowed or available, so the kernel kills the process to protect itself.&lt;/P&gt;</description>
    <pubDate>Fri, 17 May 2019 19:37:01 GMT</pubDate>
    <dc:creator>codebuilder</dc:creator>
    <dc:date>2019-05-17T19:37:01Z</dc:date>
    <item>
      <title>splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93442#M24075</link>
      <description>&lt;P&gt;splunkd died every day with the same error &lt;BR /&gt;
FATAL ProcessRunner - Unexpected EOF from process runner child!&lt;BR /&gt;
ERROR ProcessRunner - helper process seems to have died (child killed by signal 9: Killed)!&lt;/P&gt;

&lt;P&gt;I can't see anything that might have caused this... it doesn't last 24 hours after a restart.&lt;/P&gt;

&lt;P&gt;here's the partial log:&lt;BR /&gt;
04-13-2013 13:37:03.498 +0000 WARN  FilesystemChangeWatcher - error getting attributes of path "/home/c9logs/c9logs/edgdc2/sdi_slce28vmf6011/.zfs/snapshot/.auto-1365364800/config/m_domains/tasdc2_domain/servers/AdminServer/adr": Permission denied&lt;BR /&gt;
04-13-2013 13:37:03.499 +0000 WARN  FilesystemChangeWatcher - error getting attributes of path "/home/c9logs/c9logs/edgdc2/sdi_slce28vmf6011/.zfs/snapshot/.auto-1365364800/config/m_domains/tasdc2_domain/servers/AdminServer/sysman": Permission denied&lt;BR /&gt;
04-13-2013 13:38:37.102 +0000 FATAL ProcessRunner - Unexpected EOF from process runner child!&lt;BR /&gt;
04-13-2013 13:38:37.325 +0000 ERROR ProcessRunner - helper process seems to have died (child killed by signal 9: Killed)!&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 13:43:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93442#M24075</guid>
      <dc:creator>vincenty</dc:creator>
      <dc:date>2020-09-28T13:43:38Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93443#M24076</link>
      <description>&lt;P&gt;Signal 9 is a KILL signal sent by an external process. It is likely that your OS has some kind of monitor or policy in place that kills processes under certain conditions. Perhaps your administrator is watching for memory usage, access to certain files, or something similar. You should consult your system admin to find out what they have put in place.&lt;/P&gt;
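
&lt;P&gt;For illustration, here is a minimal Python sketch (not Splunk code) of what "child killed by signal 9" means: SIGKILL cannot be caught, blocked, or ignored, so the child gets no chance to log anything, and the parent only learns about the kill from the wait status:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;import os
import signal
import time

# Minimal sketch: show how a parent observes a child "killed by
# signal 9", which is what the ProcessRunner error is reporting.
pid = os.fork()
if pid == 0:
    # Child: SIGKILL cannot be trapped, so no cleanup or logging runs.
    time.sleep(60)
    os._exit(0)

os.kill(pid, signal.SIGKILL)  # what the OOM killer or an admin tool does
_, status = os.waitpid(pid, 0)
if os.WIFSIGNALED(status):
    print("child killed by signal", os.WTERMSIG(status))  # prints 9
&lt;/CODE&gt;&lt;/PRE&gt;</description>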
      <pubDate>Mon, 15 Apr 2013 06:38:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93443#M24076</guid>
      <dc:creator>gkanapathy</dc:creator>
      <dc:date>2013-04-15T06:38:49Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93444#M24077</link>
      <description>&lt;P&gt;Check syslog/dmesg to see if the kernel's oom_killer is getting invoked:&lt;/P&gt;

&lt;P&gt;Out of memory: Kill process 7575 (splunkd) score 201 or sacrifice child&lt;BR /&gt;
Killed process 7576, UID 1000, (splunkd) total-vm:70232kB, anon-rss:392kB, file-rss:152kB&lt;/P&gt;
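
&lt;P&gt;A quick way to automate that check, as a generic Python sketch (the log path and message wording vary by distro; on systemd hosts you can dump "journalctl -k" to a file and scan it the same way):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;import re

# Sketch: scan a syslog file for kernel OOM-killer entries mentioning
# splunkd. /var/log/messages is an assumption; Debian/Ubuntu use
# /var/log/kern.log or the systemd journal instead.
OOM_LINE = re.compile(r"(Out of memory|oom-killer|Killed process).*splunkd")

with open("/var/log/messages", errors="replace") as f:
    for line in f:
        if OOM_LINE.search(line):
            print(line.rstrip())
&lt;/CODE&gt;&lt;/PRE&gt;</description>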
      <pubDate>Thu, 31 Oct 2013 22:13:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93444#M24077</guid>
      <dc:creator>rvenkatesh25</dc:creator>
      <dc:date>2013-10-31T22:13:51Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93445#M24078</link>
      <description>&lt;P&gt;Was this ever resolved?&lt;/P&gt;</description>
      <pubDate>Wed, 07 Jan 2015 04:47:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93445#M24078</guid>
      <dc:creator>rsolutions</dc:creator>
      <dc:date>2015-01-07T04:47:46Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93446#M24079</link>
      <description>&lt;P&gt;My .02 is that this is memory related. I am having the same issue and a check on /var/log/messages shows: &lt;/P&gt;

&lt;P&gt;Apr 20 01:59:06 splog1 kernel: Out of memory: Kill process 45929 (splunkd) score 17 or sacrifice child&lt;BR /&gt;
Apr 20 01:59:06 splog1 kernel: Killed process 45934, UID 5000, (splunkd) total-vm:66104kB, anon-rss:1260kB, file-rss:4kB&lt;/P&gt;

&lt;P&gt;This was happening on a new instance of Enterprise 6.5.3. I traced it to an input source that was particularly large and hadn't been indexed for a while due to the upgrade. I had to restart splunkd a few times on the indexer, and now it's running well.&lt;/P&gt;
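
&lt;P&gt;As a rough aid (an illustrative Python sketch, not Splunk tooling), the kB figures in those kernel lines can be pulled out to see how big the killed process actually was:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;import re

# Illustrative sketch: extract the memory figures the kernel reports
# when it kills a process, e.g. "total-vm:66104kB, anon-rss:1260kB".
line = ("Apr 20 01:59:06 splog1 kernel: Killed process 45934, UID 5000, "
        "(splunkd) total-vm:66104kB, anon-rss:1260kB, file-rss:4kB")

fields = dict(re.findall(r"(\S+?):(\d+)kB", line))
print(fields)  # {'total-vm': '66104', 'anon-rss': '1260', 'file-rss': '4'}
&lt;/CODE&gt;&lt;/PRE&gt;</description>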
      <pubDate>Thu, 20 Apr 2017 15:33:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93446#M24079</guid>
      <dc:creator>mweissha</dc:creator>
      <dc:date>2017-04-20T15:33:49Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93447#M24080</link>
      <description>&lt;P&gt;We had this problem with an infinite loop inside a macro (the macro called itself), even though we had memory limits configured under [search] in limits.conf.&lt;/P&gt;
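
&lt;P&gt;For reference, this is the kind of limits.conf setting meant to rein in runaway search processes. The stanza below is a sketch with illustrative values; check the setting names against the limits.conf.spec for your Splunk version:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# $SPLUNK_HOME/etc/system/local/limits.conf (illustrative values)
[search]
# Track per-search-process memory usage (assumed available in your version).
enable_memory_tracker = true
# Terminate a search process whose memory usage exceeds this many MB.
search_process_memory_usage_threshold = 4000
&lt;/CODE&gt;&lt;/PRE&gt;</description>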
      <pubDate>Tue, 05 Dec 2017 08:39:07 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93447#M24080</guid>
      <dc:creator>splunkreal</dc:creator>
      <dc:date>2017-12-05T08:39:07Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93448#M24081</link>
      <description>&lt;P&gt;How did you find the macro that was causing issues and calling itself? That will be helpful for me to validate the same.&lt;/P&gt;</description>
      <pubDate>Wed, 15 May 2019 09:40:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93448#M24081</guid>
      <dc:creator>RishiMandal</dc:creator>
      <dc:date>2019-05-15T09:40:05Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93449#M24082</link>
      <description>&lt;P&gt;We correlated it with changes made that day.&lt;/P&gt;</description>
      <pubDate>Wed, 15 May 2019 09:46:19 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93449#M24082</guid>
      <dc:creator>splunkreal</dc:creator>
      <dc:date>2019-05-15T09:46:19Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93450#M24083</link>
      <description>&lt;P&gt;Did you get this resolved?&lt;BR /&gt;
Can you confirm whether splunkd was getting killed right after an active session was terminated, that is, whether it dies as soon as someone logs out of your Splunk session or the server?&lt;/P&gt;</description>
      <pubDate>Fri, 17 May 2019 19:15:05 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93450#M24083</guid>
      <dc:creator>RishiMandal</dc:creator>
      <dc:date>2019-05-17T19:15:05Z</dc:date>
    </item>
    <item>
      <title>Re: splunkd died every day with the same error</title>
      <link>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93451#M24084</link>
      <description>&lt;P&gt;Your ulimits are either not set correctly or are still at the system defaults.&lt;BR /&gt;
As a result, splunkd is likely using more memory than allowed or available, so the kernel kills the process to protect itself.&lt;/P&gt;
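
&lt;P&gt;To see what limits the splunk user actually gets, here is a quick check using Python's standard resource module (a generic sketch, not Splunk tooling; run it as the user splunkd runs as, and note that recommended values are deployment-specific):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;import resource

# Minimal sketch: print soft/hard limits for the resources that most
# often starve splunkd when left at restrictive system defaults.
CHECKS = {
    "open files (RLIMIT_NOFILE)": resource.RLIMIT_NOFILE,
    "user processes (RLIMIT_NPROC)": resource.RLIMIT_NPROC,
    "data segment (RLIMIT_DATA)": resource.RLIMIT_DATA,
    "address space (RLIMIT_AS)": resource.RLIMIT_AS,
}

for name, res in CHECKS.items():
    soft, hard = resource.getrlimit(res)  # -1 means unlimited
    print(name, "soft:", soft, "hard:", hard)
&lt;/CODE&gt;&lt;/PRE&gt;</description>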
      <pubDate>Fri, 17 May 2019 19:37:01 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/splunkd-died-every-day-with-the-same-error/m-p/93451#M24084</guid>
      <dc:creator>codebuilder</dc:creator>
      <dc:date>2019-05-17T19:37:01Z</dc:date>
    </item>
  </channel>
</rss>

