Deployment Architecture

Moving warm buckets to cold buckets manually

johnlzy0408
Loves-to-Learn Everything

My indexer is completely full now and new data cannot be indexed.

The previous settings also seem not to be working.

[root@splunk-masternode local]# cat indexes.conf
homePath.maxDataSizeMB = 80000

 

# Hot and Cold - External data sources
[volume:secondary]
path = /splunk/splunkdata
maxVolumeDataSizeMB = 1650000

 

I have tuned maxDataSizeMB down to 40000 and maxVolumeDataSizeMB down to 1550000 and restarted, but it's not clearing anything off.
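For reference, a minimal sketch of how the per-index and volume-level limits interact (the index name `my_index`, paths, and numbers here are hypothetical, not taken from the poster's environment):

```ini
# Hypothetical example: volume-based retention plus per-index caps.
[volume:secondary]
path = /splunk/splunkdata
# Keep this below the real partition size (check with df -BM), with headroom.
maxVolumeDataSizeMB = 1550000

[my_index]
homePath   = volume:secondary/my_index/db
coldPath   = volume:cold/my_index/colddb
# thawedPath cannot reference a volume.
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Cap on hot/warm (homePath) data for this index; the oldest warm
# buckets roll to cold when this is exceeded.
homePath.maxDataSizeMB = 40000
# Overall cap for this index; the oldest cold buckets are frozen
# (deleted, unless coldToFrozenDir is set) when this is exceeded.
maxTotalDataSizeMB = 500000
```

Note that `homePath.maxDataSizeMB` only rolls warm buckets to cold; it does not free space unless cold lives on a different partition, so the volume limit is what actually bounds disk usage here.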

 

/dev/mapper/splunk_hotbucket-hotbucket 1.8T 1.7T 4.9G 100% /splunk/splunkdata

The 1.65 TB limit also does not seem to be working, as usage is now 1.7 TB.

 

Does anybody have any advice? These are my current indexes.conf settings.

[root@splunk-masternode local]# cat indexes.conf
# VOLUME SETTINGS
# In this example, the volume spec here is set to the indexer-specific
# path for data storage. It satisfies the "volume:primary" tag used in
# the indexes.conf which is shared between SH and indexers.
# See also: org_all_indexes


# One Volume for Hot and Cold - Splunk default internal indexes
[volume:primary]
path = /splunk/splunkdata_internal
# Note: The *only* reason to use a volume is to set a cumulative size-based
# limit across several indexes stored on the same partition. There are *not*
# time-based volume limits.
# ~5 TB
maxVolumeDataSizeMB = 5120


# Hot and Cold - External data sources
[volume:secondary]
path = /splunk/splunkdata
maxVolumeDataSizeMB = 1550000


[volume:cold]
path = /splunk/splunkdata_cold


#[volume:frozen]
#path = /splunk/splunkdata_frozen


# This setting changes the storage location for _splunk_summaries,
# which should be utilized if you want to use the same partition
# as specified for volume settings. Otherwise defaults to $SPLUNK_DB.
#
# The size setting of the volume shown below would place a limit on the
# total size of data model acceleration (DMA) data. Doing so should be
# carefully considered as it may have a negative impact on applications
# like Enterprise Security.
#
[volume:_splunk_summaries]
path = /splunk/splunkdata
# ~ 100GB
# maxVolumeDataSizeMB = 100000

 

homePath.maxDataSizeMB = 40000


isoutamo
SplunkTrust

When you are using only volumes, and you have also defined the internal indexes to use them, Splunk shouldn't stop indexing: it starts to freeze buckets as the volume fills up. Of course, this means you must define the volume sizes correctly (the easiest way to check the partition size on Linux is df -BM /splunk/volume/xyz). You should also leave some additional free space below the volume limit, because there are situations where data comes in faster than Splunk can freeze enough buckets to make room on the indexers.
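As a concrete sketch of the headroom idea (the partition size and percentage below are hypothetical; read the real size from df -BM on your indexer):

```shell
# Hypothetical example: set the volume limit below the real partition size.
PART_MB=1800000     # pretend: partition size in MB, from `df -BM /splunk/splunkdata`
HEADROOM_PCT=15     # leave ~15% free so freezing can keep up with ingest
LIMIT_MB=$(( PART_MB * (100 - HEADROOM_PCT) / 100 ))
echo "maxVolumeDataSizeMB = $LIMIT_MB"   # -> maxVolumeDataSizeMB = 1530000
```

The exact percentage is a judgment call based on ingest rate; the point is only that maxVolumeDataSizeMB must sit comfortably below what df reports, not at it.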

There is also a known bug in 8.1.3+ where this doesn't work as it did in earlier versions. In those cases you must leave even more free space than in previous versions.

What kind of errors do you see in the internal logs indicating that freezing is not working?
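For example, a search along these lines (a sketch; BucketMover and DatabasePartitionPolicy are the splunkd components that log bucket freezing and volume-based retention activity) should surface anything relevant:

```
index=_internal sourcetype=splunkd
    (component=BucketMover OR component=DatabasePartitionPolicy)
    (log_level=WARN OR log_level=ERROR)
```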

r. Ismo
