Deployment Architecture

Moving warm buckets to cold buckets manually

johnlzy0408
Loves-to-Learn Everything

My indexer is completely full now and new events cannot be indexed.

The previous settings also seem not to be working.

[root@splunk-masternode local]# cat indexes.conf
homePath.maxDataSizeMB = 80000

 

# Hot and Cold - External data sources
[volume:secondary]
path = /splunk/splunkdata
maxVolumeDataSizeMB = 1650000

 

I have tuned maxDataSizeMB down to 40000 and maxVolumeDataSizeMB down to 1550000, and restarted, but space is still not being cleared.

 

/dev/mapper/splunk_hotbucket-hotbucket 1.8T 1.7T 4.9G 100% /splunk/splunkdata

The 1.65 T limit also does not seem to be working, as usage is now 1.7 T.

 

Does anybody have any advice? These are my current indexes.conf settings.

[root@splunk-masternode local]# cat indexes.conf
# VOLUME SETTINGS
# In this example, the volume spec here is set to the indexer-specific
# path for data storage. It satisfies the "volume:primary" tag used in
# the indexes.conf which is shared between SH and indexers.
# See also: org_all_indexes


# One Volume for Hot and Cold - Splunk default internal indexes
[volume:primary]
path = /splunk/splunkdata_internal
# Note: The *only* reason to use a volume is to set a cumulative size-based
# limit across several indexes stored on the same partition. There are *not*
# time-based volume limits.
# ~5 TB
maxVolumeDataSizeMB = 5120


# Hot and Cold - External data sources
[volume:secondary]
path = /splunk/splunkdata
maxVolumeDataSizeMB = 1550000


[volume:cold]
path = /splunk/splunkdata_cold


#[volume:frozen]
#path = /splunk/splunkdata_frozen


# This setting changes the storage location for _splunk_summaries,
# which should be utilized if you want to use the same partition
# as specified for volume settings. Otherwise defaults to $SPLUNK_DB.
#
# The size setting of the volume shown below would place a limit on the
# total size of data model acceleration (DMA) data. Doing so should be
# carefully considered as it may have a negative impact on applications
# like Enterprise Security.
#
[volume:_splunk_summaries]
path = /splunk/splunkdata
# ~ 100GB
# maxVolumeDataSizeMB = 100000

 

homePath.maxDataSizeMB = 40000
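As a side note, per-index limits such as homePath.maxDataSizeMB are normally set inside a specific index stanza; lines above the first stanza act as global defaults for every index, which may not be what is intended. A minimal sketch of how an index stanza could tie the volumes above together (the index name my_index is a placeholder, not from the original post):

```ini
# Sketch only -- "my_index" is a hypothetical index name.
[my_index]
homePath   = volume:secondary/my_index/db
coldPath   = volume:cold/my_index/colddb
# thawedPath cannot reference a volume; it must be a concrete path.
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Cap the hot/warm (home) portion of this index at ~40 GB.
homePath.maxDataSizeMB = 40000
```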

 

 

 


isoutamo
SplunkTrust

When you are using only volumes, and have defined the internal indexes to use them as well, Splunk shouldn't stop indexing: it starts freezing buckets when the volume fills up. Of course, this means that you must define the volume sizes correctly (the easiest way to check a partition's size on Linux is df -BM /splunk/volume/xyz). You must also leave some additional free space in the volume definition, because there are situations where data keeps coming in before Splunk has frozen enough buckets to free up space on the indexers.
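As a rough illustration of that headroom idea, the snippet below derives a maxVolumeDataSizeMB from a partition size, reserving 10% free space. Both the 1.8 TB partition size and the 10% figure are example assumptions, not official guidance; in practice take the size from df -BM.

```shell
# Example partition size in MB; in practice take this from:
#   df -BM /splunk/splunkdata
PARTITION_MB=1800000   # ~1.8 TB (assumed example value)
HEADROOM_PCT=10        # keep ~10% free as a safety buffer (assumption)
LIMIT_MB=$(( PARTITION_MB * (100 - HEADROOM_PCT) / 100 ))
echo "maxVolumeDataSizeMB = $LIMIT_MB"
# prints: maxVolumeDataSizeMB = 1620000
```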

There is also a known bug in 8.1.3+ where this does not work as it did earlier. In those cases you must reserve even more free space than in previous versions.

What kind of errors do you see in the internal logs indicating that freezing is not working?
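For example, one quick way to look is to grep splunkd.log for bucket-mover and freeze activity. The /opt/splunk path below is the usual default install location (an assumption; adjust to your environment), and the exact component names worth searching for may vary by version:

```shell
# Count log lines mentioning bucket freezing; prints a fallback message
# if the log is not readable at the assumed location.
LOG="${SPLUNK_HOME:-/opt/splunk}/var/log/splunk/splunkd.log"
if [ -r "$LOG" ]; then
    grep -ciE "BucketMover|coldToFrozen|freeze" "$LOG"
else
    echo "log not found: $LOG"
fi
```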

r. Ismo
