Thanks for your answer. However, we are facing an issue where there is enough space in our index, but our disk usage has reached around 80%. So I just want to know if volume trimming happens at the disk level as well. Attached below are our index configuration for the paloalto index and the disk status.
[firewall_paloalto]
coldPath = volume:cold\firewall_paloalto\colddb
homePath = volume:hotwarm\firewall_paloalto\db
thawedPath = D:\splunk_data\firewall_paloalto\thaweddb
tstatsHomePath = volume:hotwarm\firewall_paloalto\datamodel_summary
frozenTimePeriodInSecs = 47304000
maxTotalDataSizeMB = 4294967295
When buckets age out and are frozen (deleted), disk space will be restored. Given the frozenTimePeriodInSecs setting, however, buckets need to be at least 1.5 years old before they will be deleted.
Buckets will also be deleted as needed to stay within the maxTotalDataSizeMB setting, but it may take a long time to fill roughly 4 PB (depending on your ingest rate).
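For reference, here is how those two values from your stanza work out (simple arithmetic, not Splunk output):

frozenTimePeriodInSecs: 47304000 s / 86400 s per day = 547.5 days, i.e. about 1.5 years
maxTotalDataSizeMB: 4294967295 MB is about 4294967 GB, about 4295 TB, about 4.3 PB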
You may want to confirm the settings are appropriate for the index.
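If you want to see how old the oldest data still in the index actually is, one way is a standard search using the dbinspect command (startEpoch and endEpoch are the earliest and latest event times per bucket):

| dbinspect index=firewall_paloalto
| stats min(startEpoch) AS oldestEvent max(endEpoch) AS newestEvent
| eval oldestEvent=strftime(oldestEvent, "%F %T"), newestEvent=strftime(newestEvent, "%F %T")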
Thanks for the answer, but the problem is that we have enough storage for the index and it is still trimming data. Disk usage is around 80%, so I want to know whether volume trimming happens at the disk level as well.
There are many settings that factor into when data is reaped, which makes it a bit complicated. It's further complicated if you use volumes or SmartStore.
Can you share the indexes.conf stanza for the index and the [default] indexes.conf stanza?
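If it helps, the effective settings for the index (including values inherited from [default]) can be pulled with btool, the standard Splunk CLI for inspecting merged configuration; run this on the indexer:

$SPLUNK_HOME/bin/splunk btool indexes list firewall_paloalto --debug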
Are you talking about this?
[firewall_paloalto]
coldPath = volume:cold\firewall_paloalto\colddb
homePath = volume:hotwarm\firewall_paloalto\db
thawedPath = D:\splunk_data\firewall_paloalto\thaweddb
tstatsHomePath = volume:hotwarm\firewall_paloalto\datamodel_summary
frozenTimePeriodInSecs = 47304000
maxTotalDataSizeMB = 4294967295
Is there also a [default] stanza in indexes.conf? What are the volume settings?
This is the default stanza:
[default]
enableDataIntegrityControl = true
frozenTimePeriodInSecs = 47304000
repFactor = auto
maxWarmDBCount = 80
maxTotalDataSizeMB = 4294967295
[volume:hotwarm]
path = /opt/index_data/splunk_data
[volume:cold]
path = /opt/index_data/splunk_data
[volume:tstats]
path = /opt/index_data/splunk_data_tstats
The volume settings should include maxVolumeDataSizeMB so Splunk knows how large the volume is (or at least how much of it Splunk can use). Each index can use an individual maxTotalDataSizeMB setting to control how much of the volume it can consume. A sketch of what that might look like is below.
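As a sketch only, with placeholder sizes you would replace with whatever your disk can actually spare. Note that your volume:hotwarm and volume:cold stanzas point at the same path, so their two caps together apply to the same filesystem and should not add up to more than it can hold:

[volume:hotwarm]
path = /opt/index_data/splunk_data
# placeholder cap; size this to your disk
maxVolumeDataSizeMB = 400000

[volume:cold]
path = /opt/index_data/splunk_data
# placeholder cap; size this to your disk
maxVolumeDataSizeMB = 400000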
Why is the data being trimmed if the index has enough space to store new data as well as old data?
Perhaps, in the absence of maxVolumeDataSizeMB, Splunk is using a low value for the size of the volume and trimming data to "fit" that lower value.
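One way to see why buckets are being removed is to check the indexer's internal logs. A search along these lines may help (BucketMover is the splunkd component that logs bucket freezes, and its messages typically state the reason, such as frozenTimePeriodInSecs or a size limit):

index=_internal sourcetype=splunkd component=BucketMover "freeze"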