Let's say my colddb filesystem is 15 TB but the volume data size is set to 20 TB, as below (indexes.conf). What issues might this cause, or is it OK?
lsblk | grep sde
sde    8:64  0  32T  0 disk
└─sde1 8:65  0  15T  0 part /apps/splunk/colddb
On the Indexer Cluster Master server:
path = /apps/splunk/colddb
maxVolumeDataSizeMB = 20000000
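For reference, maxVolumeDataSizeMB only takes effect inside a volume stanza in indexes.conf, with indexes pointing their cold path at that volume. A minimal sketch, using a hypothetical volume name "cold" and the "main" index as an example:

```ini
# indexes.conf (volume name "cold" and the index shown are illustrative)
[volume:cold]
path = /apps/splunk/colddb
maxVolumeDataSizeMB = 20000000

# an index whose cold buckets live on that volume
[main]
coldPath = volume:cold/main/colddb
```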
Thanks @isoutamo. Agreed.
But will there be any significant problem if I define "maxVolumeDataSizeMB" larger than the size of "/apps/splunk/colddb"?
FS >> /apps/splunk/colddb = 15 TB
Indexes.conf >> maxVolumeDataSizeMB = 20 TB
If/when sum(size of all indexes' cold paths) < max volume size, then probably not, but I suppose that is not the reality! When sum(size of all indexes' cold paths) > filesystem size - 5000 MB, your node stops indexing until you free up more space.
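That 5000 MB figure corresponds to Splunk's default minFreeSpace threshold, which lives in server.conf; a sketch showing where it is set (the value shown is the documented default):

```ini
# server.conf: indexing pauses when free disk space on an indexing
# path drops below this many MB (5000 is the default)
[diskUsage]
minFreeSpace = 5000
```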
So you really should keep the max volume size low enough (at least 5% below the filesystem size) to get the benefit of using volumes. Otherwise there is no point in using them.
When you have defined the volume max size correctly, there aren't any issues. You should check the filesystem size with "df -BM /path/to/volume", which reports it in MB. Leave some space free for the filesystem; never set the max size equal to the FS size! If/when you have heavy traffic, Splunk needs time for bucket housekeeping, otherwise your indexing could stop when the FS becomes full. The filesystem also needs some space for its internal "stuff", usually 5-15%.
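Putting the advice above together, a small sketch of how you might derive a safe maxVolumeDataSizeMB from the filesystem size; the 15 TB size and the 10% headroom are assumptions matching the numbers in this thread (in practice you would read the size from "df -BM" on the real mount point):

```shell
# Assume a 15 TB cold filesystem, as in the question.
# In real life: df -BM /apps/splunk/colddb  ->  size in MB
fs_size_mb=$((15 * 1024 * 1024))        # 15 TB expressed in MB

# Leave ~10% free for bucket housekeeping and filesystem overhead
# (the thread suggests 5-15%).
max_vol_mb=$((fs_size_mb * 90 / 100))

echo "maxVolumeDataSizeMB = $max_vol_mb"
```

With these numbers the result is well below the 20000000 MB (20 TB) value from the original config, which is the point: the volume cap must fit inside the filesystem, not exceed it.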