
data trim

Siddharthnegi
Communicator

Thanks for your answer; however, we are facing an issue where there is enough space in our index, but our disk usage has reached around 80%. So I just want to know if volume trimming happens at the disk level as well. Attached below are the index configuration for the paloalto index and the disk status.

 

[firewall_paloalto]
coldPath = volume:cold\firewall_paloalto\colddb
homePath = volume:hotwarm\firewall_paloalto\db
thawedPath = D:\splunk_data\firewall_paloalto\thaweddb
tstatsHomePath = volume:hotwarm\firewall_paloalto\datamodel_summary
frozenTimePeriodInSecs = 47304000
maxTotalDataSizeMB = 4294967295


richgalloway
SplunkTrust

When buckets age out and are frozen (deleted), disk space will be restored. Given the frozenTimePeriodInSecs setting, however, buckets must be at least 1.5 years old before they will be deleted.

Buckets also will be deleted as needed to stay within the maxTotalDataSizeMB setting, but it may take a long time to fill 4PB (depending on your ingest rate).
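For reference, the two thresholds quoted above convert as follows (rough arithmetic only):

frozenTimePeriodInSecs = 47304000 -> 47304000 / 86400 = 547.5 days, about 1.5 years
maxTotalDataSizeMB = 4294967295 -> 2^32 - 1 MB, about 4,096 TB, or roughly 4 PB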

You may want to confirm the settings are appropriate for the index.

---
If this reply helps you, Karma would be appreciated.

Siddharthnegi
Communicator

Thanks for the answer, but the problem is that we have enough storage for the index, yet it is still trimming data. Disk usage is around 80%, so I want to know whether volume trimming happens at the disk level as well.


richgalloway
SplunkTrust

There are many settings that factor into when data is reaped, which makes it a bit complicated.  It's further complicated if you use volumes or SmartStore.

Can you share the indexes.conf stanza for the index and the [default] indexes.conf stanza?

---
If this reply helps you, Karma would be appreciated.

Siddharthnegi
Communicator

Are you talking about this?

 

[firewall_paloalto]
coldPath = volume:cold\firewall_paloalto\colddb
homePath = volume:hotwarm\firewall_paloalto\db
thawedPath = D:\splunk_data\firewall_paloalto\thaweddb
tstatsHomePath = volume:hotwarm\firewall_paloalto\datamodel_summary
frozenTimePeriodInSecs = 47304000
maxTotalDataSizeMB = 4294967295


richgalloway
SplunkTrust

Is there also a [default] stanza in indexes.conf?  What are the volume settings?

---
If this reply helps you, Karma would be appreciated.

Siddharthnegi
Communicator

This is the default stanza:

 

[default]
enableDataIntegrityControl = true
frozenTimePeriodInSecs = 47304000
repFactor = auto
maxWarmDBCount = 80
maxTotalDataSizeMB = 4294967295

[volume:hotwarm]
path = /opt/index_data/splunk_data

[volume:cold]
path = /opt/index_data/splunk_data

[volume:tstats]
path = /opt/index_data/splunk_data_tstats


richgalloway
SplunkTrust

The volume settings should include maxVolumeDataSizeMB so Splunk knows how large the volume is (or at least how much of it Splunk can use).  Each index can use an individual maxTotalDataSizeMB setting to control how much of the volume it can consume.
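As a sketch, the volume stanzas might look like the following. The 500000 MB caps are placeholders, not recommendations; size them to the actual disk, leaving headroom for other data on the drive.

[volume:hotwarm]
path = /opt/index_data/splunk_data
# Placeholder cap (value in MB). Once the volume reaches this size, Splunk
# rolls the oldest buckets across all indexes in the volume to frozen.
maxVolumeDataSizeMB = 500000

[volume:cold]
path = /opt/index_data/splunk_data
maxVolumeDataSizeMB = 500000

Note that in the posted config both volumes point at the same path, so two caps would be counting the same disk; that is worth keeping in mind when choosing the values.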

---
If this reply helps you, Karma would be appreciated.

Siddharthnegi
Communicator

Why is the data being trimmed if the index has enough space to store both old and new data?


richgalloway
SplunkTrust

Perhaps, in the absence of maxVolumeDataSizeMB, Splunk is using a low value for the size of the volume and trimming data to "fit" that lower value.

---
If this reply helps you, Karma would be appreciated.