Getting Data In

Problem with data retention policy

Path Finder


I have 2 indexers (IDX), one cluster master (CM) that also acts as the deployment server and license master, and 2 search heads (SH) in a cluster.

I set the data retention period to 180 days. That means whatever is older than 180 days should move to the NAS location via a coldToFrozenScript. But the data is not moving to the NAS and is not being archived. The latest event shown is from 8 months ago, i.e. July 2018.

indexes.conf:

coldPath = $SPLUNK_DB\windows_server_security\colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB\windows_server_security\db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB\windows_server_security\thaweddb
repFactor = auto
maxWarmDBCount = 150
frozenTimePeriodInSecs = 15552000
rotatePeriodInSecs = 60
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/" "$DIR"
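As a quick sanity check (plain Python, nothing Splunk-specific), the frozenTimePeriodInSecs value in the stanza above does match the stated 180-day retention:

```python
# Sanity check: frozenTimePeriodInSecs = 15552000 should equal 180 days.
seconds_per_day = 24 * 60 * 60          # 86400
retention_days = 15552000 / seconds_per_day
print(retention_days)                   # 180.0
```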

I have attached images of the CM dashboard and the indexes.conf I mentioned.

Can anyone help me with this? Thanks in advance; any help would be appreciated.




A bucket will age out only once the latest event in the bucket is older than frozenTimePeriodInSecs; that said, you will always retain some events that are actually older than frozenTimePeriodInSecs.
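A minimal sketch of that rule (the timestamps are made-up example values; the limit is taken from the indexes.conf above):

```python
FROZEN_TIME_PERIOD_IN_SECS = 15552000  # 180 days, from indexes.conf

def bucket_rolls_to_frozen(latest_event_epoch, now_epoch):
    # Splunk freezes a bucket based on its *newest* event, so older
    # events in the same bucket stay searchable until that point.
    return now_epoch - latest_event_epoch > FROZEN_TIME_PERIOD_IN_SECS

now = 1600000000   # arbitrary fixed "current" time for the example
day = 86400
# A bucket holding events from 400 days ago up to 100 days ago is kept,
# including its 400-day-old events, because the newest event is recent enough:
print(bucket_rolls_to_frozen(now - 100 * day, now))  # False: bucket kept
# A bucket whose newest event is already 200 days old gets rolled to frozen:
print(bucket_rolls_to_frozen(now - 200 * day, now))  # True: bucket frozen
```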

cheers, MuS


Ultra Champion

@satyaallaparthi - where is this nice view? I can't find it ;-)

It seems that coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/" "$DIR" fails.
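If so, the likely culprit is that the second argument stops at the bin directory and never names the script file. A working line would look something like this (the file name coldToFrozenExample.py is an assumption; use whatever name the script is actually saved under):

```ini
# Hypothetical fix: point the second argument at the script file itself.
# "coldToFrozenExample.py" is an assumed name; Splunk passes the bucket
# directory to the script automatically as the last argument.
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozenExample.py"
```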


Path Finder


C:\Program Files\Splunk\bin>splunk cmd python
usage: python Error


Path Finder

Hello ddrillic,

Below is the code that I am using for the coldToFrozenScript. Please help me with the problem I have.

# This is an example script for archiving cold buckets. It must be modified
# to suit your individual needs, and we highly recommend testing this on a
# non-production instance before deploying it.

import sys, os, gzip, shutil, subprocess, random, datetime


# For new style buckets (v4.2+), we can remove all files except for the rawdata.
# We can later rebuild all metadata and tsidx files with "splunk rebuild"
def handleNewBucket(base, files):
    print 'Archiving bucket: ' + base
    for f in files:
        full = os.path.join(base, f)
        if os.path.isfile(full):
            os.remove(full)

# For buckets created before 4.2, simply gzip the tsidx files
# To thaw these buckets, be sure to first unzip the tsidx files
def handleOldBucket(base, files):
    print 'Archiving old-style bucket: ' + base
    for f in files:
        full = os.path.join(base, f)
        if os.path.isfile(full) and (f.endswith('.tsidx') or f.endswith('.data')):
            fin = open(full, 'rb')
            fout = gzip.open(full + '.gz', 'wb')
            fout.writelines(fin)
            fout.close()
            fin.close()
            os.remove(full)

# This function is not called, but serves as an example of how to do
# the previous "flatfile" style export. This method is still not
# recommended as it is resource intensive
def handleOldFlatfileExport(base, files):
    command = ['exporttool', base, os.path.join(base, 'index.export'), 'meta::all']
    retcode = subprocess.call(command)
    if retcode != 0:
        sys.exit('exporttool failed with return code: ' + str(retcode))

    for f in files:
        full = os.path.join(base, f)
        if os.path.isfile(full):
            os.remove(full)
        elif os.path.isdir(full):
            shutil.rmtree(full)
        else:
            print 'Warning: found irregular bucket file: ' + full

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit('usage: python <bucket_dir_to_archive>')

    # Set this to the real archive destination (the NAS path in this setup)
    ARCHIVE_DIR = os.path.join(os.environ['SPLUNK_HOME'], 'frozenarchive')

    if not os.path.isdir(ARCHIVE_DIR):
        try:
            os.mkdir(ARCHIVE_DIR)
        except OSError:
            # Ignore already-exists errors; another concurrent invocation may have created this dir
            sys.stderr.write("mkdir warning: Directory '" + ARCHIVE_DIR + "' already exists\n")
    bucket = sys.argv[1]
    if not os.path.isdir(bucket):
        sys.exit('Given bucket is not a valid directory: ' + bucket)

    rawdatadir = os.path.join(bucket, 'rawdata')
    if not os.path.isdir(rawdatadir):
        sys.exit('No rawdata directory, given bucket is likely invalid: ' + bucket)

    files = os.listdir(bucket)
    journal = os.path.join(rawdatadir, 'journal.gz')
    if os.path.isfile(journal):
        handleNewBucket(bucket, files)
    else:
        handleOldBucket(bucket, files)

    if bucket.endswith('/'):
        bucket = bucket[:-1]

    indexname = os.path.basename(os.path.dirname(os.path.dirname(bucket)))
    ##destdir = os.path.join(ARCHIVE_DIR, indexname, os.path.basename(bucket))
    destdir = os.path.join(ARCHIVE_DIR, indexname, os.path.basename(bucket) + '_' + datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))

    while os.path.isdir(destdir):
        print 'Warning: This bucket already exists in the archive directory'
        print 'Adding a random extension to this directory...'
        destdir += '.' + str(random.randrange(10))

    ##shutil.copytree(bucket, destdir)
    shutil.make_archive(destdir, 'zip', bucket)