<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Why is our Splunk server crashing with error &quot;Received fatal signal 11 (Segmentation fault)...No memory mapped&quot;? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/216200#M42568</link>
    <description>&lt;P&gt;OS: CentOS 6.7&lt;BR /&gt;
Splunk version: 6.3.2&lt;/P&gt;

&lt;P&gt;For a few months, our Splunk server has been crashing every 15 minutes or so.&lt;BR /&gt;
When checking the splunkd logs, here are the details of what I saw:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;Received fatal signal 11 (Segmentation fault).
 Cause:
   No memory mapped at address [0x00000054].
 Crashing thread: IndexerTPoolWorker-1
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Any clue as to why this is happening?&lt;/P&gt;</description>
    <pubDate>Wed, 06 Jan 2016 21:33:32 GMT</pubDate>
    <dc:creator>laquerre007</dc:creator>
    <dc:date>2016-01-06T21:33:32Z</dc:date>
    <item>
      <title>Why is our Splunk server crashing with error "Received fatal signal 11 (Segmentation fault)...No memory mapped"?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/216200#M42568</link>
      <description>&lt;P&gt;OS: CentOS 6.7&lt;BR /&gt;
Splunk version: 6.3.2&lt;/P&gt;

&lt;P&gt;For a few months, our Splunk server has been crashing every 15 minutes or so.&lt;BR /&gt;
When checking the splunkd logs, here are the details of what I saw:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;Received fatal signal 11 (Segmentation fault).
 Cause:
   No memory mapped at address [0x00000054].
 Crashing thread: IndexerTPoolWorker-1
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Any clue as to why this is happening?&lt;/P&gt;</description>
      <pubDate>Wed, 06 Jan 2016 21:33:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/216200#M42568</guid>
      <dc:creator>laquerre007</dc:creator>
      <dc:date>2016-01-06T21:33:32Z</dc:date>
    </item>
    <item>
      <title>Re: Why is our Splunk server crashing with error "Received fatal signal 11 (Segmentation fault)...No memory mapped"?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/216201#M42569</link>
      <description>&lt;P&gt;Hi, &lt;BR /&gt;
We have been facing the exact same issue.  Interestingly enough, we were able to replicate it by simply opening a dashboard, and we separated the search head and indexer to figure out where the problem was.  The search head was crashing with the existing configuration.&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;Short story:&lt;/STRONG&gt;&lt;BR /&gt;
We found a saved search in a user's private context that was named with the single character "a".  Once this saved search was renamed to something longer, the problem went away.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;$SPLUNK_HOME/etc/users/mary/search/local/savedsearches.conf
[a]
...
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Rename the stanza to something longer:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[some_longer_name_a]
...
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;For this we had to edit the file directly; you cannot do it from the web interface.&lt;/P&gt;
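
&lt;P&gt;A quick way to spot such one-character stanza names across user configs is a short scan like the sketch below. The glob pattern is an assumption based on the path quoted above, and Splunk .conf files are close to, but not strictly, INI format, hence the lenient parser settings:&lt;/P&gt;

```python
# Sketch: flag suspiciously short stanza names in per-user
# savedsearches.conf files. Path layout is an assumption based on
# the $SPLUNK_HOME/etc/users/mary/search/local example above.
import configparser
import glob
import os

def short_stanzas(pattern):
    """Return (path, stanza) pairs whose stanza name is one character."""
    hits = []
    for path in glob.glob(pattern):
        # Splunk conf files allow duplicate keys and '=' in values,
        # so parse as leniently as configparser allows.
        cp = configparser.ConfigParser(strict=False, interpolation=None)
        cp.read(path)
        for stanza in cp.sections():
            if len(stanza) == 1:
                hits.append((path, stanza))
    return hits

if __name__ == "__main__":
    home = os.environ.get("SPLUNK_HOME", "/opt/splunk")
    pattern = os.path.join(home, "etc/users/*/*/local/savedsearches.conf")
    for path, stanza in short_stanzas(pattern):
        print(f"{path}: [{stanza}]")
```

&lt;P&gt;Renaming any flagged stanza directly in the file (followed by a restart) matched the fix described above.&lt;/P&gt;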

&lt;P&gt;&lt;STRONG&gt;Long story:&lt;/STRONG&gt;&lt;BR /&gt;
The problem occurred when one of the available dashboards was opened (or its link was followed).  It also happened when we created a very simple dashboard with a single search panel.  We could not replicate it with concurrent searches, so it very much looked like an issue with the web instance.&lt;/P&gt;

&lt;P&gt;Splunk crashed in the same place every time, and the issue was easy to replicate.  Here's a portion of the crash log:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[build aaff59bb082c] 2016-01-29 21:18:31
Received fatal signal 11 (Segmentation fault).
Cause:
   No memory mapped at address [0x0000000000000008].
Crashing thread: TcpChannelThread
Registers:
    RIP:  [0x0000000000DA1D78] _ZNK9Paginator3cmpEP10ConfigItemS1_m + 104 (splunkd)
...
    OLDMASK:  [0x0000000000000000]
OS: Linux
Arch: x86-64
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;On a brand new search head, we added apps (&lt;CODE&gt;$SPLUNK_HOME/etc/apps&lt;/CODE&gt;) and local config (&lt;CODE&gt;$SPLUNK_HOME/etc/system/local&lt;/CODE&gt;) and user config (&lt;CODE&gt;$SPLUNK_HOME/etc/users&lt;/CODE&gt;) one by one to figure out where the problem may be.&lt;/P&gt;

&lt;P&gt;It boiled down to one specific user's configuration, say "mary" (&lt;CODE&gt;$SPLUNK_HOME/etc/users/mary&lt;/CODE&gt;).  So we removed that user's existing configuration one piece at a time: dashboards, panels, and configuration files, retesting the search head crash (by opening a dashboard) after each removal.  It turned out to be the savedsearches.conf file, as mentioned in the short version of this story above.&lt;/P&gt;
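
&lt;P&gt;The one-by-one elimination described above can be sketched as a small helper.  The still_crashes callable is a hypothetical stand-in for "restart the search head and open the offending dashboard"; the set-aside-and-restore approach is the same one we used by hand:&lt;/P&gt;

```python
# Sketch of one-by-one elimination: temporarily set each config file
# aside, re-run the crash check, and restore the file either way.
import os

def find_culprit(config_paths, still_crashes):
    """Return the first path whose removal stops the crash, or None."""
    for path in config_paths:
        aside = path + ".aside"
        os.rename(path, aside)          # set this config aside
        try:
            if not still_crashes():     # crash gone: this file is the culprit
                return path
        finally:
            os.rename(aside, path)      # always restore the original file
    return None
```

&lt;P&gt;Restoring every file in a finally block keeps the configuration intact no matter where the loop stops.&lt;/P&gt;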

&lt;P&gt;The other interesting finding is that when we log on as "mary" and open this private dashboard, nothing bad happens: no crashes.&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;Conclusion:&lt;/STRONG&gt;&lt;BR /&gt;
A support ticket has been opened and we do not have a fix for this issue yet, but we were able to find out that some users were not following the naming conventions &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 01 Feb 2016 07:38:10 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/216201#M42569</guid>
      <dc:creator>selim</dc:creator>
      <dc:date>2016-02-01T07:38:10Z</dc:date>
    </item>
    <item>
      <title>Re: Why is our Splunk server crashing with error "Received fatal signal 11 (Segmentation fault)...No memory mapped"?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/216202#M42570</link>
      <description>&lt;P&gt;I managed to work around this by untarring the current version of Splunk over the top of the existing installation,&lt;BR /&gt;
then running a chown to make sure all the files were owned by the right user, and starting Splunk back up.&lt;BR /&gt;
Worked for me; I hope this helps someone else.&lt;/P&gt;</description>
      <pubDate>Thu, 06 Oct 2016 00:03:56 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/216202#M42570</guid>
      <dc:creator>mrgibbon</dc:creator>
      <dc:date>2016-10-06T00:03:56Z</dc:date>
    </item>
    <item>
      <title>Re: Why is our Splunk server crashing with error "Received fatal signal 11 (Segmentation fault)...No memory mapped"?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/590417#M103438</link>
      <description>&lt;P&gt;Is there any solution for the above issue? I am seeing the same one on Splunk version 8.1.6.&lt;/P&gt;</description>
      <pubDate>Wed, 23 Mar 2022 12:37:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/Why-is-our-Splunk-server-crashing-with-error-quot-Received-fatal/m-p/590417#M103438</guid>
      <dc:creator>Janssen135</dc:creator>
      <dc:date>2022-03-23T12:37:47Z</dc:date>
    </item>
  </channel>
</rss>

