<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How does the volume size maxVolumeDataSizeMB apply if you have a mix of volumes and indexes paths ? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/How-does-the-volume-size-maxVolumeDataSizeMB-apply-if-you-have-a/m-p/358807#M65504</link>
    <description>&lt;P&gt;After testing and researching, I can confirm. Here are the conclusions:&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Volume definitions are logical.&lt;/STRONG&gt; 
When measuring the volume size, Splunk will &lt;STRONG&gt;only count the size of the index locations&lt;/STRONG&gt; (coldPath, homePath, thawedPath, or tstatsHomePath) &lt;STRONG&gt;that are defined using that volume&lt;/STRONG&gt;. &lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Example:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:testvolumeA] 
path = /mount/disk 
maxVolumeDataSizeMB=500 

[index1] 
homePath = volume:testvolumeA/index1/db 
coldPath = volume:testvolumeA/index1/colddb 
thawedPath = volume:testvolumeA/index1/thaweddb 
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;In this case, index1's homePath, coldPath, and thawedPath will be considered to be on the same logical volume. &lt;/P&gt;

&lt;P&gt;To enforce the volume size limit, only those index locations are summed up, and when a bucket has to be frozen, it will be one of the buckets in those locations. &lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;Now consider the situation where you have &lt;STRONG&gt;several volumes pointing to the same path&lt;/STRONG&gt;:&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Example:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:testvolumeA] 
path = /mount/disk 
maxVolumeDataSizeMB=500 
[volume:testvolumeB] 
path = /mount/disk 
maxVolumeDataSizeMB=100 

[index1] 
homePath = volume:testvolumeA/index1/db 
coldPath = volume:testvolumeA/index1/colddb 
thawedPath = volume:testvolumeA/index1/thaweddb 
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary 

[index2] 
homePath = volume:testvolumeB/index2/db 
coldPath = volume:testvolumeB/index2/colddb 
thawedPath = volume:testvolumeB/index2/thaweddb 
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The two volumes testvolumeA and testvolumeB will both be &lt;STRONG&gt;monitored as two separate entities&lt;/STRONG&gt;, and each of them will only measure the subfolders defined using that volume. &lt;/P&gt;

&lt;P&gt;That means that if you enforce volume size limits, both volumes apply their limits separately, to their specific index folders. &lt;BR /&gt;
In my example: &lt;BR /&gt;
testvolumeA will keep its monitored subfolders under 500MB &lt;BR /&gt;
testvolumeB will keep its monitored subfolders under 100MB &lt;BR /&gt;
This means that the actual physical path /mount/disk can grow up to 500MB + 100MB = 600MB. &lt;/P&gt;

&lt;P&gt;I think that this will also be the situation if you use a volume pointing to $SPLUNK_DB, as _splunk_summaries also uses it:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:_splunk_summaries] 
path = $SPLUNK_DB 
[volume:summary] 
path = $SPLUNK_DB 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;So you can either estimate your volume size limits to ensure that their sum will not fill your physical disk, &lt;BR /&gt;
or you can redefine all your paths to use a single volume and manage the size globally. &lt;/P&gt;
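
&lt;P&gt;For example, a single shared volume could look like this (a sketch; the volume name and the 600MB limit are illustrative, not from the configurations above):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:onevolume] 
path = /mount/disk 
maxVolumeDataSizeMB = 600 

[index1] 
homePath = volume:onevolume/index1/db 
coldPath = volume:onevolume/index1/colddb 
thawedPath = volume:onevolume/index1/thaweddb 

[index2] 
homePath = volume:onevolume/index2/db 
coldPath = volume:onevolume/index2/colddb 
thawedPath = volume:onevolume/index2/thaweddb 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;With this layout, both indexes are counted against the single 600MB limit, so /mount/disk cannot grow beyond it.&lt;/P&gt;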

&lt;UL&gt;
&lt;LI&gt;Another situation is &lt;STRONG&gt;when you mix a volume with a path&lt;/STRONG&gt;. &lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Example:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:testvolumeA] 
path = /mount/disk 
maxVolumeDataSizeMB=500 

[index1] 
homePath = volume:testvolumeA/index1/db 
coldPath = volume:testvolumeA/index1/colddb 
thawedPath = volume:testvolumeA/index1/thaweddb 
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary 

[_internal] 
homePath = $SPLUNK_DB/_internaldb/db 
coldPath = $SPLUNK_DB/_internaldb/colddb 
thawedPath = $SPLUNK_DB/_internaldb/thaweddb 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;In this case you will see a warning in splunkd.log for the _internal index's homePath, coldPath, and thawedPath, as they are not defined using a volume but are located under the same path as a volume. &lt;/P&gt;

&lt;P&gt;Example:&lt;/P&gt;

&lt;BLOCKQUOTE&gt;
&lt;P&gt;06-07-2017 16:27:01.976 -0700 WARN ProcessTracker - (child_6__Fsck) IndexConfig - idx=summary Path homePath='/mount/disk/_internaldb/db' (realpath '/mount/disk/_internaldb/db') is inside volume=testvolumeA (path='/mount/disk', realpath='/mount/disk'), but does not reference that volume. Space used by homePath will &lt;EM&gt;not&lt;/EM&gt; be volume-managed. Please check indexes.conf for configuration errors. &lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
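
&lt;P&gt;To make those paths volume-managed, one option (a sketch, not part of the original configuration above) is to reference the volume explicitly in the _internal stanza:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[_internal] 
homePath = volume:testvolumeA/_internaldb/db 
coldPath = volume:testvolumeA/_internaldb/colddb 
thawedPath = volume:testvolumeA/_internaldb/thaweddb 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;With this, the _internal buckets are counted toward testvolumeA's 500MB limit and the warning goes away.&lt;/P&gt;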

&lt;UL&gt;
&lt;LI&gt;Why does the tstatsHomePath volume not throw this warning out of the box, like the other paths do?&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;It appears that the warning only exists for homePath, coldPath, and thawedPath; it does not exist for tstatsHomePath. &lt;BR /&gt;
This is why we do not get those warnings on a vanilla Splunk install, as by default the tstatsHomePath uses the volume: &lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:_splunk_summaries] 
path = $SPLUNK_DB 
&lt;/CODE&gt;&lt;/PRE&gt;</description>
    <pubDate>Tue, 29 Sep 2020 14:32:14 GMT</pubDate>
    <dc:creator>yannK</dc:creator>
    <dc:date>2020-09-29T14:32:14Z</dc:date>
    <item>
      <title>How does the volume size maxVolumeDataSizeMB apply if you have a mix of volumes and indexes paths ?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-does-the-volume-size-maxVolumeDataSizeMB-apply-if-you-have-a/m-p/358806#M65503</link>
      <description>&lt;P&gt;I want to use Volumes in indexes.conf to limit the space used by my indexes.&lt;/P&gt;

&lt;P&gt;On each index, I see 4 paths: homePath / coldPath / thawedPath / tstatsHomePath. &lt;BR /&gt;
The last one seems to be used for accelerated data models or report accelerations.&lt;/P&gt;

&lt;P&gt;How does this work?&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;I noticed that there are several possible paths, and some of them (the summaries) already use volumes, which happen to point to the default $SPLUNK_DB path. 

&lt;UL&gt;
&lt;LI&gt;Does a volume consider other folders that are not managed by Splunk?&lt;/LI&gt;
&lt;LI&gt;Does a volume consider other folders in the same location if they use paths (instead of volumes)?&lt;/LI&gt;
&lt;/UL&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 20 Jun 2017 17:43:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-does-the-volume-size-maxVolumeDataSizeMB-apply-if-you-have-a/m-p/358806#M65503</guid>
      <dc:creator>yannK</dc:creator>
      <dc:date>2017-06-20T17:43:12Z</dc:date>
    </item>
    <item>
      <title>Re: How does the volume size maxVolumeDataSizeMB apply if you have a mix of volumes and indexes paths ?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-does-the-volume-size-maxVolumeDataSizeMB-apply-if-you-have-a/m-p/358807#M65504</link>
      <description>&lt;P&gt;After testing and researching, I can confirm. Here are the conclusions:&lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Volume definitions are logical.&lt;/STRONG&gt; 
When measuring the volume size, Splunk will &lt;STRONG&gt;only count the size of the index locations&lt;/STRONG&gt; (coldPath, homePath, thawedPath, or tstatsHomePath) &lt;STRONG&gt;that are defined using that volume&lt;/STRONG&gt;. &lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Example:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:testvolumeA] 
path = /mount/disk 
maxVolumeDataSizeMB=500 

[index1] 
homePath = volume:testvolumeA/index1/db 
coldPath = volume:testvolumeA/index1/colddb 
thawedPath = volume:testvolumeA/index1/thaweddb 
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;In this case, index1's homePath, coldPath, and thawedPath will be considered to be on the same logical volume. &lt;/P&gt;

&lt;P&gt;To enforce the volume size limit, only those index locations are summed up, and when a bucket has to be frozen, it will be one of the buckets in those locations. &lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;Now consider the situation where you have &lt;STRONG&gt;several volumes pointing to the same path&lt;/STRONG&gt;:&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Example:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:testvolumeA] 
path = /mount/disk 
maxVolumeDataSizeMB=500 
[volume:testvolumeB] 
path = /mount/disk 
maxVolumeDataSizeMB=100 

[index1] 
homePath = volume:testvolumeA/index1/db 
coldPath = volume:testvolumeA/index1/colddb 
thawedPath = volume:testvolumeA/index1/thaweddb 
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary 

[index2] 
homePath = volume:testvolumeB/index2/db 
coldPath = volume:testvolumeB/index2/colddb 
thawedPath = volume:testvolumeB/index2/thaweddb 
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;The two volumes testvolumeA and testvolumeB will both be &lt;STRONG&gt;monitored as two separate entities&lt;/STRONG&gt;, and each of them will only measure the subfolders defined using that volume. &lt;/P&gt;

&lt;P&gt;That means that if you enforce volume size limits, both volumes apply their limits separately, to their specific index folders. &lt;BR /&gt;
In my example: &lt;BR /&gt;
testvolumeA will keep its monitored subfolders under 500MB &lt;BR /&gt;
testvolumeB will keep its monitored subfolders under 100MB &lt;BR /&gt;
This means that the actual physical path /mount/disk can grow up to 500MB + 100MB = 600MB. &lt;/P&gt;

&lt;P&gt;I think that this will also be the situation if you use a volume pointing to $SPLUNK_DB, as _splunk_summaries also uses it:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:_splunk_summaries] 
path = $SPLUNK_DB 
[volume:summary] 
path = $SPLUNK_DB 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;So you can either estimate your volume size limits to ensure that their sum will not fill your physical disk, &lt;BR /&gt;
or you can redefine all your paths to use a single volume and manage the size globally. &lt;/P&gt;
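
&lt;P&gt;For example, a single shared volume could look like this (a sketch; the volume name and the 600MB limit are illustrative, not from the configurations above):&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:onevolume] 
path = /mount/disk 
maxVolumeDataSizeMB = 600 

[index1] 
homePath = volume:onevolume/index1/db 
coldPath = volume:onevolume/index1/colddb 
thawedPath = volume:onevolume/index1/thaweddb 

[index2] 
homePath = volume:onevolume/index2/db 
coldPath = volume:onevolume/index2/colddb 
thawedPath = volume:onevolume/index2/thaweddb 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;With this layout, both indexes are counted against the single 600MB limit, so /mount/disk cannot grow beyond it.&lt;/P&gt;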

&lt;UL&gt;
&lt;LI&gt;Another situation is &lt;STRONG&gt;when you mix a volume with a path&lt;/STRONG&gt;. &lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;Example:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:testvolumeA] 
path = /mount/disk 
maxVolumeDataSizeMB=500 

[index1] 
homePath = volume:testvolumeA/index1/db 
coldPath = volume:testvolumeA/index1/colddb 
thawedPath = volume:testvolumeA/index1/thaweddb 
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary 

[_internal] 
homePath = $SPLUNK_DB/_internaldb/db 
coldPath = $SPLUNK_DB/_internaldb/colddb 
thawedPath = $SPLUNK_DB/_internaldb/thaweddb 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;In this case you will see a warning in splunkd.log for the _internal index's homePath, coldPath, and thawedPath, as they are not defined using a volume but are located under the same path as a volume. &lt;/P&gt;

&lt;P&gt;Example:&lt;/P&gt;

&lt;BLOCKQUOTE&gt;
&lt;P&gt;06-07-2017 16:27:01.976 -0700 WARN ProcessTracker - (child_6__Fsck) IndexConfig - idx=summary Path homePath='/mount/disk/_internaldb/db' (realpath '/mount/disk/_internaldb/db') is inside volume=testvolumeA (path='/mount/disk', realpath='/mount/disk'), but does not reference that volume. Space used by homePath will &lt;EM&gt;not&lt;/EM&gt; be volume-managed. Please check indexes.conf for configuration errors. &lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
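
&lt;P&gt;To make those paths volume-managed, one option (a sketch, not part of the original configuration above) is to reference the volume explicitly in the _internal stanza:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[_internal] 
homePath = volume:testvolumeA/_internaldb/db 
coldPath = volume:testvolumeA/_internaldb/colddb 
thawedPath = volume:testvolumeA/_internaldb/thaweddb 
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;With this, the _internal buckets are counted toward testvolumeA's 500MB limit and the warning goes away.&lt;/P&gt;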

&lt;UL&gt;
&lt;LI&gt;Why does the tstatsHomePath volume not throw this warning out of the box, like the other paths do?&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;It appears that the warning only exists for homePath, coldPath, and thawedPath; it does not exist for tstatsHomePath. &lt;BR /&gt;
This is why we do not get those warnings on a vanilla Splunk install, as by default the tstatsHomePath uses the volume: &lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;[volume:_splunk_summaries] 
path = $SPLUNK_DB 
&lt;/CODE&gt;&lt;/PRE&gt;</description>
      <pubDate>Tue, 29 Sep 2020 14:32:14 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-does-the-volume-size-maxVolumeDataSizeMB-apply-if-you-have-a/m-p/358807#M65504</guid>
      <dc:creator>yannK</dc:creator>
      <dc:date>2020-09-29T14:32:14Z</dc:date>
    </item>
  </channel>
</rss>

