<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Possible memory leak in 4.3.6 in Monitoring Splunk</title>
    <link>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85618#M1056</link>
    <description>&lt;P&gt;Answering my own question, showing all the steps I took.&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;&lt;P&gt;Upgrading the volume for hot/warm buckets from 250 IOPS to 1200 IOPS didn't sort out the memory usage patterns, but high-IOPS volumes are a good thing anyway.&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;On the search heads and indexers I had the Unix app installed, which produced some errors, but it hadn't caused any problems before, so I didn't look into it at the time. Removing the Unix app (and other default ones) helped a bit, but after a couple of minutes memory started climbing again.&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Upgraded from 4.3.6 to 5.0.3. The process was straightforward: dpkg -i the package (see the sketch after this list). No more memory leaks; memory stays at 2-4%.&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;&lt;/LI&gt;
&lt;/UL&gt;
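
&lt;P&gt;For reference, the upgrade itself is only a few commands. A minimal sketch, assuming a default /opt/splunk install on Debian/Ubuntu; the .deb filename below is a placeholder, not the real 5.0.3 build name:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# stop the running 4.3.6 instance
/opt/splunk/bin/splunk stop
# install the 5.0.3 package over the existing installation (filename is a placeholder)
dpkg -i splunk-5.0.3-BUILD-linux-2.6-amd64.deb
# first start after the upgrade runs the migration; --accept-license skips the prompt
/opt/splunk/bin/splunk start --accept-license
&lt;/CODE&gt;&lt;/PRE&gt;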

&lt;P&gt;&lt;IMG src="http://i.imgur.com/OAvY2oG.png" alt="alt text" /&gt;&lt;/P&gt;</description>
    <pubDate>Sat, 06 Jul 2013 08:26:46 GMT</pubDate>
    <dc:creator>jakubincloud</dc:creator>
    <dc:date>2013-07-06T08:26:46Z</dc:date>
    <item>
      <title>Possible memory leak in 4.3.6</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85615#M1053</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;

&lt;P&gt;I have an environment with 2 search heads and 2 indexers. There are 70-ish forwarders, which send around 50 MB of data a day.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;lsof -i :port | wc -l # shows established connections
70
&lt;/CODE&gt;&lt;/PRE&gt;
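
&lt;P&gt;(The same count can be taken with netstat; 9997 below is just the default Splunk receiving port, an assumption, so substitute your own.)&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# count established TCP connections on the receiving port
netstat -tn | grep ':9997' | grep -c ESTABLISHED
&lt;/CODE&gt;&lt;/PRE&gt;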

&lt;P&gt;On one search head there are 6 real-time searches, which can be seen in the 'ps' output:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;ps -Lef
(...) splunkd search --id=rt_1373011410.1218 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1218 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1219 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1219 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1218 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1219 --maxbuckets=0
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;However, I see an increasing number of splunkd threads, currently sitting at 39:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;ps -Lef | grep -v grep | grep "splunkd -p 8089" | wc -l
39
&lt;/CODE&gt;&lt;/PRE&gt;
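
&lt;P&gt;One way to confirm the growth is steady rather than spiky is to sample splunkd's resident memory over time. A minimal sketch, assuming a default /opt/splunk install (the pid file path, log path and interval are assumptions):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# first line of splunkd.pid is the main splunkd process
PID=$(head -n 1 /opt/splunk/var/run/splunk/splunkd.pid)
# append a timestamped RSS sample (in KB) every 60 seconds
while true; do
  echo "$(date +%s) $(ps -o rss= -p "$PID")"
  sleep 60
done &gt;&gt; /tmp/splunkd_rss.log
&lt;/CODE&gt;&lt;/PRE&gt;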

&lt;P&gt;Furthermore, there are a couple of processes for mrsparkle:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;python -O /opt/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/root.py restart
&lt;/CODE&gt;&lt;/PRE&gt;
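
&lt;P&gt;To see whether it is splunkd itself or the mrsparkle python process that grows, the biggest memory consumers can be listed directly (one reasonable choice of ps fields):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# processes sorted by resident memory, largest first
ps -eo pid,rss,vsz,comm --sort=-rss | head -n 10
&lt;/CODE&gt;&lt;/PRE&gt;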

&lt;P&gt;The problem is that Splunk gradually uses up all of the memory. The Mem Used Percentage graph can be seen here:&lt;/P&gt;

&lt;P&gt;&lt;IMG src="http://i.imgur.com/uDQYxFJ.png" alt="alt text" /&gt;&lt;/P&gt;

&lt;P&gt;(Edit: for your information, the indexers have 34 GB of memory each.)&lt;/P&gt;

&lt;P&gt;You can see manual restarts, and forced ones when memory usage reaches 100% and splunkd is killed by the OOM killer.&lt;/P&gt;
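
&lt;P&gt;The kernel logs which process the OOM killer picked, so it is worth verifying it really is splunkd being killed (the syslog path varies by distro):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;dmesg | grep -i -E 'out of memory|oom'
# Debian/Ubuntu path; use /var/log/messages on RHEL/CentOS
grep -i 'killed process' /var/log/syslog
&lt;/CODE&gt;&lt;/PRE&gt;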

&lt;P&gt;All Splunk instances have been updated to 4.3.6 and have the Deployment Monitor app disabled.&lt;/P&gt;

&lt;P&gt;Is there something else I can do to find out what is causing the memory leak?&lt;/P&gt;</description>
      <pubDate>Fri, 05 Jul 2013 08:57:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85615#M1053</guid>
      <dc:creator>jakubincloud</dc:creator>
      <dc:date>2013-07-05T08:57:46Z</dc:date>
    </item>
    <item>
      <title>Re: Possible memory leak in 4.3.6</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85616#M1054</link>
      <description>&lt;P&gt;Your memory usage patterns seem weird to me because you are processing virtually no data. I have over 50 GB coming in each day with only 8 GB of memory, and it doesn't run out.&lt;/P&gt;

&lt;P&gt;Unless you are doing some massively complex processing on the inbound data, it isn't normal to run out of memory with only 50 MB per day. I would say there is some sort of loop behaviour going on in your Splunk infrastructure, but without knowing how things are set up and what you are doing with the data, it is quite hard to give you good guidance.&lt;/P&gt;
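
&lt;P&gt;One quick way to rule out a forwarding loop (an instance forwarding back into the indexing tier) is to list each instance's configured forward-servers, e.g.:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# run on each indexer/search head; any entry pointing back at an indexer suggests a loop
/opt/splunk/bin/splunk list forward-server
&lt;/CODE&gt;&lt;/PRE&gt;</description>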
      <pubDate>Fri, 05 Jul 2013 12:28:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85616#M1054</guid>
      <dc:creator>krugger</dc:creator>
      <dc:date>2013-07-05T12:28:50Z</dc:date>
    </item>
    <item>
      <title>Re: Possible memory leak in 4.3.6</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85617#M1055</link>
      <description>&lt;P&gt;Thank you for your answer. Upgrading from 4.3.6 to 5.0.3 solved the problem.&lt;/P&gt;</description>
      <pubDate>Sat, 06 Jul 2013 08:13:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85617#M1055</guid>
      <dc:creator>jakubincloud</dc:creator>
      <dc:date>2013-07-06T08:13:32Z</dc:date>
    </item>
    <item>
      <title>Re: Possible memory leak in 4.3.6</title>
      <link>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85618#M1056</link>
      <description>&lt;P&gt;Answering my own question, showing all the steps I took.&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;&lt;P&gt;Upgrading the volume for hot/warm buckets from 250 IOPS to 1200 IOPS didn't sort out the memory usage patterns, but high-IOPS volumes are a good thing anyway.&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;On the search heads and indexers I had the Unix app installed, which produced some errors, but it hadn't caused any problems before, so I didn't look into it at the time. When I removed the Unix app (and other default ones) it helped a bit, but after a couple of minutes memory started climbing again.&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Upgraded from 4.3.6 to 5.0.3. The process was straightforward: dpkg -i the package (see the sketch after this list). No more memory leaks; memory stays at 2-4%.&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;&lt;/LI&gt;
&lt;/UL&gt;
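
&lt;P&gt;For reference, the upgrade itself is only a few commands. A minimal sketch, assuming a default /opt/splunk install on Debian/Ubuntu; the .deb filename below is a placeholder, not the real 5.0.3 build name:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# stop the running 4.3.6 instance
/opt/splunk/bin/splunk stop
# install the 5.0.3 package over the existing installation (filename is a placeholder)
dpkg -i splunk-5.0.3-BUILD-linux-2.6-amd64.deb
# first start after the upgrade runs the migration; --accept-license skips the prompt
/opt/splunk/bin/splunk start --accept-license
&lt;/CODE&gt;&lt;/PRE&gt;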

&lt;P&gt;&lt;IMG src="http://i.imgur.com/OAvY2oG.png" alt="alt text" /&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 06 Jul 2013 08:26:46 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Monitoring-Splunk/Possible-memory-leak-in-4-3-6/m-p/85618#M1056</guid>
      <dc:creator>jakubincloud</dc:creator>
      <dc:date>2013-07-06T08:26:46Z</dc:date>
    </item>
  </channel>
</rss>

