Knowledge Management

Volume setting "maxVolumeDataSizeMB" no longer trims non-SmartStore index when sharing volume with SmartStore index

Splunk Employee

We migrated a single index to SmartStore about 3 months ago. Since recently upgrading to v8.0.3, it appears that data retention policies are no longer being applied to local volumes.

I see this in splunkd.log:

05-14-2020 09:54:57.707 -0700 WARN VolumeManager - Not trimming volume=splunk_coldStorage. Using maxVolumeDataSizeMB setting is ignored for volumes containing remote-storage enabled indexes. Please revisit your volume settings.

From the message in the log, it seems that the SmartStore config and the local volume config are in conflict, but I am not entirely sure how to correct this. The relevant entries from my indexes.conf are:

[volume:s3]
storageType = remote
path = s3://…….

[volume:test_indexes]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 700000

[AccessProtection]
coldPath = volume:test_indexes/AccessProtection/colddb
homePath = volume:test_indexes/AccessProtection/db
thawedPath = $SPLUNK_DB/AccessProtection/thaweddb
repFactor = auto
frozenTimePeriodInSecs = 3456000
enableDataIntegrityControl = true
maxDataSize = 200
...
[floating-point-index]
remote.s3.encryption = sse-kms
remotePath = volume:s3/floating-point
coldPath = volume:test_indexes/floating-point/colddb
datatype = metric
homePath = volume:test_indexes/floating-point/db
maxTotalDataSizeMB = 512000
repFactor = auto
thawedPath = $SPLUNK_DB/floating-point/thaweddb
maxDataSize = 200
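The warning fires because volume:test_indexes is referenced by both a plain index (AccessProtection) and a SmartStore-enabled index (floating-point-index, which has a remotePath). A quick way to spot such mixed volumes is to map each volume to the indexes that use it. Below is a minimal sketch that parses an abbreviated copy of the config from this thread; the bucket path and stanza contents are trimmed-down examples, not the full configuration.

```python
# Sketch: flag local volumes shared by SmartStore and non-SmartStore
# indexes -- the situation that triggers the VolumeManager warning above.
# The config text is abbreviated from this thread; values are examples.
from configparser import ConfigParser
from collections import defaultdict

INDEXES_CONF = """
[volume:s3]
storageType = remote
path = s3://example-bucket

[volume:test_indexes]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 700000

[AccessProtection]
coldPath = volume:test_indexes/AccessProtection/colddb
homePath = volume:test_indexes/AccessProtection/db

[floating-point-index]
remotePath = volume:s3/floating-point
coldPath = volume:test_indexes/floating-point/colddb
homePath = volume:test_indexes/floating-point/db
"""

def volumes_used(stanza):
    """Yield local volume names referenced by homePath/coldPath."""
    for key in ("homePath", "coldPath"):
        value = stanza.get(key, "")
        if value.startswith("volume:"):
            # "volume:test_indexes/foo/db" -> "test_indexes"
            yield value.split("/", 1)[0].split(":", 1)[1]

def mixed_volumes(conf_text):
    cp = ConfigParser(interpolation=None)  # keep $SPLUNK_DB literal
    cp.read_string(conf_text)
    usage = defaultdict(lambda: {"smartstore": set(), "local": set()})
    for name in cp.sections():
        if name.startswith("volume:"):
            continue
        kind = "smartstore" if cp[name].get("remotePath") else "local"
        for vol in volumes_used(cp[name]):
            usage[vol][kind].add(name)
    return {v: u for v, u in usage.items()
            if u["smartstore"] and u["local"]}

print(mixed_volumes(INDEXES_CONF))
# flags "test_indexes": shared by a SmartStore and a local index
```

Any volume the function returns is one where maxVolumeDataSizeMB will be ignored, per the WARN message.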

Re: Volume setting "maxVolumeDataSizeMB" no longer trims non-SmartStore index when sharing volume with SmartStore index

Splunk Employee

VolumeManager trim operations are not compatible with SmartStore (S2) and can lead to unpredictable behavior. We should not mix S2 indexes and non-S2 indexes on the same volume that has maxVolumeDataSizeMB set.

After separating the S2 indexes onto their own volume, the volume manager begins trimming the excess data:

[volume:s3]
storageType = remote
path = s3://......

# Separate S2 (SmartStore) indexes from non-S2 indexes so that
# maxVolumeDataSizeMB works on the non-S2 indexes

[volume:s3_indexes]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 7000

[volume:test_indexes]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 700000

[DA-ESS-AccessProtection]
coldPath = volume:test_indexes/AccessProtection/colddb
homePath = volume:test_indexes/AccessProtection/db
thawedPath = $SPLUNK_DB/AccessProtection/thaweddb
repFactor = auto
frozenTimePeriodInSecs = 3456000
enableDataIntegrityControl = true
maxDataSize = 200
...

[floating-point-index]
remote.s3.encryption = sse-kms
remotePath = volume:s3/floating-point
coldPath = volume:s3_indexes/floating-point/colddb
datatype = metric
homePath = volume:s3_indexes/floating-point/db
maxTotalDataSizeMB = 512000
repFactor = auto
thawedPath = $SPLUNK_DB/floating-point/thaweddb
maxDataSize = 200
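Once the non-S2 volume is no longer shared, the volume manager enforces maxVolumeDataSizeMB by freezing the oldest buckets until the volume fits under the cap. The sketch below is a deliberately simplified model of that trimming step, with invented bucket names and sizes; real VolumeManager behavior has more nuance (frozen-data handling, per-index limits, etc.).

```python
# Sketch: simplified model of maxVolumeDataSizeMB trimming on a volume
# that holds only non-SmartStore indexes. Bucket data is invented.
def trim_volume(buckets, max_volume_mb):
    """buckets: list of (name, size_mb, latest_event_time).
    Freezes oldest buckets first until total size fits under the cap.
    Returns (kept, frozen)."""
    total = sum(size for _, size, _ in buckets)
    # Oldest buckets (smallest latest event time) are frozen first.
    ordered = sorted(buckets, key=lambda b: b[2])
    frozen = []
    for bucket in ordered:
        if total <= max_volume_mb:
            break
        frozen.append(bucket)
        total -= bucket[1]
    kept = [b for b in buckets if b not in frozen]
    return kept, frozen

buckets = [("db_1", 200, 100), ("db_2", 200, 200), ("db_3", 200, 300)]
kept, frozen = trim_volume(buckets, 450)
# db_1 (the oldest 200 MB bucket) is frozen; the remaining 400 MB
# fits under the 450 MB cap
```

In the original setup this trimming was skipped entirely for volume:test_indexes, which is why the retention policy appeared not to apply.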