<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Clarification - indexes.conf in Deployment Architecture</title>
    <link>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356479#M13199</link>
    <description>&lt;P&gt;Let's go one by one:&lt;/P&gt;

&lt;P&gt;1: Does &lt;CODE&gt;maxTotalDataSizeMB&lt;/CODE&gt; or &lt;CODE&gt;maxVolumeDataSizeMB&lt;/CODE&gt; take precedence?&lt;BR /&gt;
A: Neither; &lt;CODE&gt;maxTotalDataSizeMB&lt;/CODE&gt; and &lt;CODE&gt;maxVolumeDataSizeMB&lt;/CODE&gt; carry equal weight, so whichever limit you hit first will be enforced.&lt;/P&gt;

&lt;P&gt;2a: As per my understanding, &lt;CODE&gt;maxVolumeDataSizeMB&lt;/CODE&gt; is the total size of databases this directory can hold.&lt;BR /&gt;
A: No, it is the total of EVERYTHING in the volume, not just "databases"; if you dump a 20 TB tarfile in there, you have no room left for any "database" buckets.&lt;BR /&gt;
2b: In other words, it can store hot_warm data until it reaches 1.45 TB, and the buckets will then roll to cold (which can hold 9.4 TB). Is that right?&lt;BR /&gt;
A: Yes, assuming nothing non-Splunk is also using that directory (nothing should be).&lt;/P&gt;

&lt;P&gt;3: Will the pan index be capped at 20 TB in total across both volumes (hot_warm and cold)?&lt;BR /&gt;
A: Yes, exactly.&lt;/P&gt;

&lt;P&gt;4a: Keeping &lt;CODE&gt;maxDataSize&lt;/CODE&gt; as &lt;CODE&gt;auto&lt;/CODE&gt;, I believe there will be 300 hot_warm buckets of 750 MB each in the hot_warm volume. Is that right?&lt;BR /&gt;
A: The &lt;CODE&gt;auto&lt;/CODE&gt; setting fixes the bucket size at 750 MB, which in your case means 1450000/750 ~ 1933 buckets (almost all of them will be &lt;CODE&gt;warm&lt;/CODE&gt;).&lt;BR /&gt;
4b: And if I change it to &lt;CODE&gt;auto_high_volume&lt;/CODE&gt;, would there be 300 hot_warm buckets of 10 GB each? If so, would it keep much more hot_warm data than it does now?&lt;BR /&gt;
A: &lt;CODE&gt;auto_high_volume&lt;/CODE&gt; sets the bucket size to 10 GB on 64-bit systems (1 GB on 32-bit), which in your case (assuming 64-bit) means 1450000/10240 ~ 141 buckets. The total amount of hot_warm data is still capped by the volume size; you simply get fewer, larger buckets.&lt;/P&gt;

&lt;P&gt;5: Is &lt;CODE&gt;maxWarmDBCount&lt;/CODE&gt; 300 by default?&lt;BR /&gt;
A: Yes, the default is &lt;CODE&gt;300&lt;/CODE&gt;.&lt;/P&gt;

&lt;P&gt;NOTE: You are converting TB to MB with a factor of 1000*1000, which is not correct; the binary factor is 1024*1024, so 1.45 TB is 1.45*1024*1024 ~ 1520435 MB, not 1450000.&lt;BR /&gt;
Also note that the actual size of your warm buckets may slightly exceed &lt;CODE&gt;maxDataSize&lt;/CODE&gt;, due to post-processing and timing issues in the rolling policy.&lt;BR /&gt;
Also note that some defaults vary from version to version.&lt;/P&gt;
    <pubDate>Tue, 29 Sep 2020 13:53:43 GMT</pubDate>
    <dc:creator>woodcock</dc:creator>
    <dc:date>2020-09-29T13:53:43Z</dc:date>
    <item>
      <title>Clarification - indexes.conf</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356478#M13198</link>
      <description>&lt;P&gt;This is my indexes.conf configuration:&lt;/P&gt;

&lt;P&gt;[volume:hot_warm]&lt;BR /&gt;
path = /store/hot_warm&lt;BR /&gt;
maxVolumeDataSizeMB = 1450000&lt;/P&gt;

&lt;P&gt;[volume:cold]&lt;BR /&gt;
path = /store/cold&lt;BR /&gt;
maxVolumeDataSizeMB = 9400000&lt;/P&gt;

&lt;P&gt;[pan]&lt;BR /&gt;
homePath   = volume:hot_warm/pan/db&lt;BR /&gt;
coldPath   = volume:cold/pan/colddb&lt;BR /&gt;
tstatsHomePath = volume:hot_warm/pan/datamodel_summary&lt;BR /&gt;
thawedPath = /restore/pan/thaweddb&lt;BR /&gt;
coldToFrozenDir = /store/cold/archive/pan&lt;BR /&gt;
maxDataSize = auto&lt;BR /&gt;
frozenTimePeriodInSecs = 31536000 -&amp;gt; when data rolls from cold to frozen; that is, after 1 year.&lt;BR /&gt;
maxTotalDataSizeMB = 20000000 -&amp;gt; maximum size of the index; that is, 20 TB.&lt;BR /&gt;
enableTsidxReduction = true -&amp;gt; reduces the size of TSIDX files; buckets shrink, but searches over them are slower.&lt;BR /&gt;
timePeriodInSecBeforeTsidxReduction = 2592000 -&amp;gt; after 30 days, TSIDX reduction kicks in.&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;In this case, does maxTotalDataSizeMB or maxVolumeDataSizeMB take precedence?&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;As per my understanding, maxVolumeDataSizeMB is the total size of databases this directory can hold. In other words, it can store hot_warm data until it reaches 1.45 TB, and the databases will then roll into cold (which can hold 9.4 TB). &lt;STRONG&gt;Is that right?&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;Will the pan index be capped at 20 TB in total across both volumes (hot_warm and cold)?&lt;/STRONG&gt;&lt;BR /&gt;
&lt;STRONG&gt;Also, keeping maxDataSize as auto, I believe there will be 300 hot_warm buckets of 750 MB each in the hot_warm volume. Is that right?&lt;/STRONG&gt;&lt;BR /&gt;
&lt;STRONG&gt;And if I change it to auto_high_volume, would there be 300 hot_warm buckets of 10 GB each? If that is the case, would it keep much more hot_warm data than it does now?&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;Is maxWarmDBCount 300 by default?&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 13:53:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356478#M13198</guid>
      <dc:creator>vr2312</dc:creator>
      <dc:date>2020-09-29T13:53:38Z</dc:date>
    </item>
    <item>
      <title>Re: Clarification - indexes.conf</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356479#M13199</link>
      <description>&lt;P&gt;Let's go one by one:&lt;/P&gt;

&lt;P&gt;1: Does &lt;CODE&gt;maxTotalDataSizeMB&lt;/CODE&gt; or &lt;CODE&gt;maxVolumeDataSizeMB&lt;/CODE&gt; take precedence?&lt;BR /&gt;
A: Neither; &lt;CODE&gt;maxTotalDataSizeMB&lt;/CODE&gt; and &lt;CODE&gt;maxVolumeDataSizeMB&lt;/CODE&gt; carry equal weight, so whichever limit you hit first will be enforced.&lt;/P&gt;
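
&lt;P&gt;A minimal sketch of the two caps side by side (values taken from your own config; the comments are mine):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# indexes.conf -- both limits are active at the same time
[volume:hot_warm]
path = /store/hot_warm
maxVolumeDataSizeMB = 1450000    # cap on EVERYTHING stored in this volume

[pan]
homePath = volume:hot_warm/pan/db
maxTotalDataSizeMB = 20000000    # cap on this one index, across all its volumes
# Splunk starts freezing the oldest buckets as soon as EITHER cap is reached.&lt;/CODE&gt;&lt;/PRE&gt;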

&lt;P&gt;2a: As per my understanding, &lt;CODE&gt;maxVolumeDataSizeMB&lt;/CODE&gt; is the total size of databases this directory can hold.&lt;BR /&gt;
A: No, it is the total of EVERYTHING in the volume, not just "databases"; if you dump a 20 TB tarfile in there, you have no room left for any "database" buckets.&lt;BR /&gt;
2b: In other words, it can store hot_warm data until it reaches 1.45 TB, and the buckets will then roll to cold (which can hold 9.4 TB). Is that right?&lt;BR /&gt;
A: Yes, assuming nothing non-Splunk is also using that directory (nothing should be).&lt;/P&gt;

&lt;P&gt;3: Will the pan index be capped at 20 TB in total across both volumes (hot_warm and cold)?&lt;BR /&gt;
A: Yes, exactly.&lt;/P&gt;

&lt;P&gt;4a: Keeping &lt;CODE&gt;maxDataSize&lt;/CODE&gt; as &lt;CODE&gt;auto&lt;/CODE&gt;, I believe there will be 300 hot_warm buckets of 750 MB each in the hot_warm volume. Is that right?&lt;BR /&gt;
A: The &lt;CODE&gt;auto&lt;/CODE&gt; setting fixes the bucket size at 750 MB, which in your case means 1450000/750 ~ 1933 buckets (almost all of them will be &lt;CODE&gt;warm&lt;/CODE&gt;).&lt;BR /&gt;
4b: And if I change it to &lt;CODE&gt;auto_high_volume&lt;/CODE&gt;, would there be 300 hot_warm buckets of 10 GB each? If so, would it keep much more hot_warm data than it does now?&lt;BR /&gt;
A: &lt;CODE&gt;auto_high_volume&lt;/CODE&gt; sets the bucket size to 10 GB on 64-bit systems (1 GB on 32-bit), which in your case (assuming 64-bit) means 1450000/10240 ~ 141 buckets. The total amount of hot_warm data is still capped by the volume size; you simply get fewer, larger buckets.&lt;/P&gt;
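
&lt;P&gt;The arithmetic above as a rough sketch (approximate, since real buckets are rarely all full-size):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;volume cap        = 1450000 MB
auto:               1450000 / 750   ~ 1933 buckets of   750 MB
auto_high_volume:   1450000 / 10240 ~  141 buckets of 10240 MB (10 GB)&lt;/CODE&gt;&lt;/PRE&gt;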

&lt;P&gt;5: Is &lt;CODE&gt;maxWarmDBCount&lt;/CODE&gt; 300 by default?&lt;BR /&gt;
A: Yes, the default is &lt;CODE&gt;300&lt;/CODE&gt;.&lt;/P&gt;

&lt;P&gt;NOTE: You are converting TB to MB with a factor of 1000*1000, which is not correct; the binary factor is 1024*1024, so 1.45 TB is 1.45*1024*1024 ~ 1520435 MB, not 1450000.&lt;BR /&gt;
Also note that the actual size of your warm buckets may slightly exceed &lt;CODE&gt;maxDataSize&lt;/CODE&gt;, due to post-processing and timing issues in the rolling policy.&lt;BR /&gt;
Also note that some defaults vary from version to version.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 13:53:43 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356479#M13199</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2020-09-29T13:53:43Z</dc:date>
    </item>
    <item>
      <title>Re: Clarification - indexes.conf</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356480#M13200</link>
      <description>&lt;P&gt;@woodcock&lt;/P&gt;

&lt;P&gt;Do you mean that maxVolumeDataSizeMB takes precedence over the (potentially multiple, summed together) maxTotalDataSizeMB values?&lt;/P&gt;</description>
      <pubDate>Mon, 01 May 2017 11:23:03 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356480#M13200</guid>
      <dc:creator>vr2312</dc:creator>
      <dc:date>2017-05-01T11:23:03Z</dc:date>
    </item>
    <item>
      <title>Re: Clarification - indexes.conf</title>
      <link>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356481#M13201</link>
      <description>&lt;P&gt;It is kind of both, so I reworded my answer.&lt;/P&gt;</description>
      <pubDate>Tue, 24 Oct 2017 00:35:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Deployment-Architecture/Clarification-indexes-conf/m-p/356481#M13201</guid>
      <dc:creator>woodcock</dc:creator>
      <dc:date>2017-10-24T00:35:49Z</dc:date>
    </item>
  </channel>
</rss>

