Is there a method to keep data in the hot/warm DB for 1 year, and move it to the cold DB once it is 1 year old or older?
Description: once collected data becomes 1 year old, can it be moved to the cold DB?
I am trying to keep collected data on the local disk (hot/warm) for 1 year.
After the data becomes 1 year old, I want to move it to an external storage unit.
Also, can I solve this with a warmToColdScript?
If that is possible, I need such a script.
Is there a way to do this?
Thank you
Splunk divides an index into hot, warm, and cold buckets. All three of these are online and active. You could have a strategy where the hot and warm data are kept on a fast disk, and the cold data is kept on a slower disk, but still online. Here is a sample:
indexes.conf
[myindex]
homePath = FASTDISK:\splunk\myindex\db
coldPath = SLOWDISK:\splunk\myindex\colddb
thawedPath = SLOWDISK:\splunk\myindex\thaweddb
coldToFrozenDir = ARCHIVEDISK:\splunk\myindex\frozen
maxTotalDataSizeMB = 1000000
frozenTimePeriodInSecs = 31536000
This sets the maximum size of the index to 1 TB, keeps the data online (hot, warm, cold) for 1 year as long as the overall index size does not exceed 1 TB, and writes the data to the frozen directory when it is removed from the cold directory. By default, Splunk will keep the most recent 225 GB of data on the fast disk in hot or warm. The directory names are in Windows syntax; on Linux, just substitute valid Linux directory names.
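A quick back-of-the-envelope check of that 225 GB figure (a sketch; it assumes Splunk's defaults of `maxWarmDBCount = 300` warm buckets and the "auto" bucket size of 750 MB):

```python
# Rough arithmetic behind the "225 GB on the fast disk by default" claim
# (assumes Splunk defaults: maxWarmDBCount = 300, bucket size 750 MB).
default_warm_buckets = 300
bucket_size_mb = 750

hot_warm_mb = default_warm_buckets * bucket_size_mb
print(hot_warm_mb / 1000)  # 225.0 -> GB of recent data kept in hot/warm
```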
If this does not answer your question, please be more specific and give more details. Thanks!
While I agree with this, do you happen to have a DOCS link with guidance to "You could have a strategy where the hot and warm data are kept on a fast disk, and the cold data is kept on a slower disk"?
Thought the following may be relevant to mention here; I got it from the Splunk wiki. You can define global settings before the index stanzas and then override them individually for each index.
# global settings
# Inheritable by all indexes: no hot/warm bucket can exceed 1 TB.
# Individual indexes can override this setting.
homePath.maxDataSizeMB = 1000000
# volumes
[volume:caliente]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000
[volume:frio]
path = /mnt/big_disk
maxVolumeDataSizeMB = 1000000
# indexes
[i1]
homePath = volume:caliente/i1
# homePath.maxDataSizeMB is inherited from the global setting
coldPath = volume:frio/i1
# coldPath.maxDataSizeMB not specified anywhere:
# This results in no size limit - old-style behavior
AFAIK, there is no way to explicitly set a time limit for hot/warm. The size of hot/warm can only be limited by number of buckets or by disk space. But if you know approximately how much data you index per day, you can estimate the number of buckets:
Indexing 2 GB per day / 750 MB buckets * 365 days per year = approx. 973 buckets
So, add this to the indexes.conf settings
maxWarmDBCount = 973
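The bucket estimate above can be reproduced in a few lines (a sketch; 750 MB is the default "auto" bucket size, and your actual daily volume will vary):

```python
# Estimate how many warm buckets hold one year of data
# (assumes ~2 GB indexed per day and the default 750 MB bucket size).
daily_volume_mb = 2000   # ~2 GB per day
bucket_size_mb = 750     # default "auto" bucket size
days = 365

buckets = daily_volume_mb * days / bucket_size_mb
print(round(buckets))    # 973 -> a reasonable value for maxWarmDBCount
```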
If you leave off the line
coldToFrozenDir = ARCHIVEDISK:\splunk\myindex\frozen
then Splunk will delete the data once it has exceeded the retention time OR the index size limit is reached.
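To make that OR behavior concrete, here is a minimal sketch of the freeze decision for the oldest cold bucket. This is not Splunk's actual code, just an illustration of the documented rule, and `should_freeze` is a hypothetical helper:

```python
import time

def should_freeze(bucket_latest_epoch, index_size_mb,
                  frozen_time_period_secs, max_total_mb, now=None):
    """Freeze (archive or delete) the oldest bucket if its newest event is
    older than the retention period OR the whole index exceeds its size cap.
    Hypothetical helper mirroring Splunk's documented behavior."""
    now = time.time() if now is None else now
    too_old = (now - bucket_latest_epoch) > frozen_time_period_secs
    too_big = index_size_mb > max_total_mb
    return too_old or too_big

# Bucket whose newest event is 400 days old, 1-year retention: frozen.
assert should_freeze(0, 500_000, 31536000, 1_000_000, now=400 * 86400)
# Data still fresh, but the index is over its 1 TB cap: also frozen.
assert should_freeze(0, 1_200_000, 31536000, 1_000_000, now=86400)
```

Either condition alone is enough to roll the bucket out of cold, which is why a size cap that is too small can delete data before the time-based retention you expect.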
Also, if the total time is 1 year in hot/warm and 1 year in cold, then you should set the retention time to 2 years -
frozenTimePeriodInSecs = 63072000
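That two-year value works out as simple arithmetic (using a 365-day year, as in the 1-year value above):

```python
# Retention of 2 years expressed in seconds (365-day years).
seconds_per_year = 365 * 24 * 60 * 60   # 31536000
print(2 * seconds_per_year)             # 63072000 -> frozenTimePeriodInSecs
```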
Thanks for the fast answer.
First, I am trying not to use frozen, because of a data storage problem.
For example, suppose sample.log data is collected on November 1, 2012.
It is fine for that data to stay in the hot or warm DB until November 2013.
And it is fine if it moves to the cold DB on or after November 1, 2013.
In other words, I want data to move from warm to the cold DB 1 year after it is collected.
Is that possible?
Thank you