<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Why does search keep getting killed by 'Signal 9'? in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/140545#M184840</link>
    <description>&lt;P&gt;For a certain search I keep getting the following error:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Search process did not exit cleanly, exit_code=0, description="killed by signal 9: Killed". Please look in search.log for this peer in the Job Inspector for more info.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Now, the search.log doesn't show much useful information except that the job was indeed killed. A bit of searching on the error taught me that 'Signal 9' means the job was killed 'externally', probably due to memory issues.&lt;/P&gt;
&lt;P&gt;The strange thing is that this only happens for this specific query, which is far from the heaviest or most complex one running, and I have never had any memory issues before. Also, the query works like a charm when the timespan is several days, but setting the timespan to 1 month or more gives me this error.&lt;/P&gt;
&lt;P&gt;Anyone?&lt;/P&gt;</description>
    <pubDate>Wed, 14 Dec 2022 07:31:51 GMT</pubDate>
    <dc:creator>dkoops</dc:creator>
    <dc:date>2022-12-14T07:31:51Z</dc:date>
    <item>
      <title>Why does search keep getting killed by 'Signal 9'?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/140545#M184840</link>
      <description>&lt;P&gt;For a certain search I keep getting the following error:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Search process did not exit cleanly, exit_code=0, description="killed by signal 9: Killed". Please look in search.log for this peer in the Job Inspector for more info.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Now, the search.log doesn't show much useful information except that the job was indeed killed. A bit of searching on the error taught me that 'Signal 9' means the job was killed 'externally', probably due to memory issues.&lt;/P&gt;
&lt;P&gt;The strange thing is that this only happens for this specific query, which is far from the heaviest or most complex one running, and I have never had any memory issues before. Also, the query works like a charm when the timespan is several days, but setting the timespan to 1 month or more gives me this error.&lt;/P&gt;
&lt;P&gt;Anyone?&lt;/P&gt;</description>
      <pubDate>Wed, 14 Dec 2022 07:31:51 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/140545#M184840</guid>
      <dc:creator>dkoops</dc:creator>
      <dc:date>2022-12-14T07:31:51Z</dc:date>
    </item>
    <item>
      <title>Re: My search keeps getting killed by 'Signal 9'</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/140546#M184841</link>
      <description>&lt;P&gt;What is the search? If it includes a &lt;CODE&gt;transaction&lt;/CODE&gt; command, then you may very well be running out of memory.&lt;/P&gt;</description>
      <pubDate>Sun, 12 Apr 2015 17:24:13 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/140546#M184841</guid>
      <dc:creator>lguinn2</dc:creator>
      <dc:date>2015-04-12T17:24:13Z</dc:date>
    </item>
    <item>
      <title>Re: My search keeps getting killed by 'Signal 9'</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/140547#M184842</link>
      <description>&lt;P&gt;Thank you for the reply. The search doesn't contain memory-intensive commands like transactions. It is only used to calculate some variables imported into another search that does a simple regression on all volumes:&lt;/P&gt;

&lt;P&gt;&lt;EM&gt;source='source1' OR source='source2' earliest=@d-31d latest=@d&lt;BR /&gt;
| fields _time 'volume-id' name 'storage_used_percent'&lt;BR /&gt;
| eval name=if(source=="source1", 'volume-id', name)&lt;BR /&gt;
| stats count as count, sum(storage_used_percent) as Y, sum(_time) as X, sum(eval(_time*storage_used_percent)) as XY, sum(eval(_time*_time)) as X2, latest(_time) as T1, earliest(_time) as T0 by name&lt;/EM&gt;&lt;/P&gt;

&lt;P&gt;While the environment is quite big (&amp;gt;6000 volumes), even larger searches over a longer timespan work just fine. Also, after testing a bit: if I set the timerange to anything up to 26 days ago -&amp;gt; now, it works, but a period longer than 26 days -&amp;gt; now gives the signal 9 error.&lt;/P&gt;</description>
      <pubDate>Mon, 28 Sep 2020 19:27:57 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/140547#M184842</guid>
      <dc:creator>dkoops</dc:creator>
      <dc:date>2020-09-28T19:27:57Z</dc:date>
    </item>
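The stats sums in the query above (count, X, Y, XY, X2) are the classic running sums of an ordinary least-squares fit. A minimal Python sketch of the downstream calculation, assuming made-up data points; the variable names mirror the query's field aliases, not any actual Splunk API:

```python
# Least-squares slope/intercept computed from the same running sums the
# SPL stats command accumulates per volume: count, X, Y, XY, X2.
def regress(points):
    n = len(points)
    X = sum(t for t, _ in points)        # sum(_time)
    Y = sum(v for _, v in points)        # sum(storage_used_percent)
    XY = sum(t * v for t, v in points)   # sum(eval(_time*storage_used_percent))
    X2 = sum(t * t for t, _ in points)   # sum(eval(_time*_time))
    slope = (n * XY - X * Y) / (n * X2 - X * X)
    intercept = (Y - slope * X) / n
    return slope, intercept

# Points lying exactly on y = 2x + 1 recover slope 2, intercept 1.
print(regress([(1, 3), (2, 5), (3, 7)]))  # → (2.0, 1.0)
```

Note that `_time` is epoch seconds, so `_time*_time` produces values around 10^18 per event; over a month of data for &gt;6000 volumes, the intermediate results grow large even though the search itself looks simple.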
    <item>
      <title>Re: Why does search keep getting killed by 'Signal 9'?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/624186#M217007</link>
      <description>&lt;P&gt;Very old post - I don't think we will get any answer here…&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2024 17:21:02 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/624186#M217007</guid>
      <dc:creator>iamsahil</dc:creator>
      <dc:date>2024-11-28T17:21:02Z</dc:date>
    </item>
    <item>
      <title>Re: Why does search keep getting killed by 'Signal 9'?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/674566#M230894</link>
      <description>&lt;P&gt;This post is a few years old, but to aid those who have spent hours trying to find the answer and end up here for help: in my case I saw the "killed by signal 9" error because of the proxy configuration. I had 2 different cases of 'signal 9'.&lt;/P&gt;&lt;P&gt;In one case the solution was that the application itself allowed the proxy to be set in the app settings; in the other case the solution was this:&amp;nbsp;&lt;A href="https://community.splunk.com/t5/All-Apps-and-Add-ons/Can-you-configure-the-Duo-Splunk-Connector-to-use-a-web-proxy/m-p/486022" target="_blank"&gt;Can you configure the Duo Splunk Connector to use ... - Splunk Community&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 17 Jan 2024 19:07:39 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Why-does-search-keep-getting-killed-by-Signal-9/m-p/674566#M230894</guid>
      <dc:creator>flynegal</dc:creator>
      <dc:date>2024-01-17T19:07:39Z</dc:date>
    </item>
  </channel>
</rss>

