<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic 1 out of 2 indexer has high RAM utilisation in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/1-out-of-2-indexer-has-high-RAM-utilisation/m-p/390437#M69857</link>
    <description>&lt;P&gt;After our upgrade from 6.5 to 7.2, 1 of 2 indexers has high RAM utilisation. We are running Enterprise Security too.&lt;/P&gt;

&lt;P&gt;Health status from the search head is showing yellow for splunkd - data forwarding (I assume to that indexer?).&lt;/P&gt;

&lt;P&gt;Health status on that indexer is showing red for buckets.&lt;/P&gt;

&lt;P&gt;The percentage of small buckets created (60) over the last hour is very high and exceeded the red threshold (50) for index=app_logs, and possibly more indexes, on this indexer.&lt;/P&gt;

&lt;P&gt;So I'm not sure why it's creating lots of small buckets - is this related to how we set up our inputs?&lt;/P&gt;</description>
    <pubDate>Mon, 12 Nov 2018 02:43:33 GMT</pubDate>
    <dc:creator>mjm295</dc:creator>
    <dc:date>2018-11-12T02:43:33Z</dc:date>
    <item>
      <title>1 out of 2 indexer has high RAM utilisation</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/1-out-of-2-indexer-has-high-RAM-utilisation/m-p/390437#M69857</link>
      <description>&lt;P&gt;After our upgrade from 6.5 to 7.2, 1 of 2 indexers has high RAM utilisation. We are running Enterprise Security too.&lt;/P&gt;

&lt;P&gt;Health status from the search head is showing yellow for splunkd - data forwarding (I assume to that indexer?).&lt;/P&gt;

&lt;P&gt;Health status on that indexer is showing red for buckets.&lt;/P&gt;

&lt;P&gt;The percentage of small buckets created (60) over the last hour is very high and exceeded the red threshold (50) for index=app_logs, and possibly more indexes, on this indexer.&lt;/P&gt;

&lt;P&gt;So I'm not sure why it's creating lots of small buckets - is this related to how we set up our inputs?&lt;/P&gt;</description>
      <pubDate>Mon, 12 Nov 2018 02:43:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/1-out-of-2-indexer-has-high-RAM-utilisation/m-p/390437#M69857</guid>
      <dc:creator>mjm295</dc:creator>
      <dc:date>2018-11-12T02:43:33Z</dc:date>
    </item>
    <item>
      <title>Re: 1 out of 2 indexer has high RAM utilisation</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/1-out-of-2-indexer-has-high-RAM-utilisation/m-p/390438#M69858</link>
      <description>&lt;P&gt;indexes.conf stanza for the index in question:&lt;/P&gt;

&lt;P&gt;[app_logs]&lt;BR /&gt;
homePath = $SPLUNK_DB/app_logs/db&lt;BR /&gt;
coldPath = $SPLUNK_DB/app_logs/colddb&lt;BR /&gt;
thawedPath = $SPLUNK_DB/app_logs/thaweddb&lt;BR /&gt;
frozenTimePeriodInSecs = 31557600&lt;BR /&gt;
disabled = 0&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 22:02:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/1-out-of-2-indexer-has-high-RAM-utilisation/m-p/390438#M69858</guid>
      <dc:creator>mjm295</dc:creator>
      <dc:date>2020-09-29T22:02:32Z</dc:date>
    </item>
  </channel>
</rss>