Getting Data In

Problem with data retention policy

satyaallaparthi
Communicator

Hello,

I have 2 indexers (IDX), one cluster master (CM) which also acts as the deployment server and license master, and 2 search heads (SH) in a cluster.

I set the data retention period to 180 days. That means whatever is older than 180 days should be moved to a NAS location by the coldToFrozenScript. But the data is not being moved to the NAS and is not being archived; events older than 8 months, i.e. from July 2018, are still showing.

Indexes.conf :

[windows_server_security]
coldPath = $SPLUNK_DB\windows_server_security\colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB\windows_server_security\db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB\windows_server_security\thaweddb
repFactor = auto
maxWarmDBCount = 150
frozenTimePeriodInSecs = 15552000
rotatePeriodInSecs = 60
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozenExample.py" "$DIR"
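
For reference, the frozenTimePeriodInSecs value above does work out to 180 days:

>>> 180 * 24 * 60 * 60   # days * hours * minutes * seconds
15552000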

I have attached images from the CM dashboard and the indexes.conf I mentioned.

Can anyone help me with this? Thanks in advance; any help would be appreciated.

[CM dashboard screenshot]


MuS
SplunkTrust

The buckets will only age out once the latest event in the bucket is older than frozenTimePeriodInSecs; that said, you will always have some events that are actually older than frozenTimePeriodInSecs.
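
As a rough way to check this against your own buckets, here is a small sketch (untested, and the cold path is a placeholder for your environment) that parses the bucket directory names under the cold path (they are named db_<latestEventEpoch>_<earliestEventEpoch>_<id>, or rb_... for replicated copies) and reports when each bucket becomes old enough to freeze:

import os, time, datetime

COLD_PATH = r'C:\Program Files\Splunk\var\lib\splunk\windows_server_security\colddb'  # placeholder
FROZEN_SECS = 15552000  # frozenTimePeriodInSecs (180 days)

now = time.time()
for name in sorted(os.listdir(COLD_PATH)):
    parts = name.split('_')
    # only look at bucket directories: db_<latest>_<earliest>_<id>... or rb_...
    if parts[0] not in ('db', 'rb') or len(parts) < 4 or not parts[1].isdigit():
        continue
    latest = int(parts[1])
    freeze_at = latest + FROZEN_SECS
    latest_day = datetime.datetime.fromtimestamp(latest).strftime('%Y-%m-%d')
    if freeze_at <= now:
        state = 'eligible to freeze now'
    else:
        state = 'will freeze around ' + datetime.datetime.fromtimestamp(freeze_at).strftime('%Y-%m-%d')
    print('%s -> latest event %s, %s' % (name, latest_day, state))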

cheers, MuS


ddrillic
Ultra Champion

@satyaallaparthi - where is this nice view? I can't find it ;-)

It seems that coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozenExample.py" "$DIR" fails.


satyaallaparthi
Communicator

I am getting the following:

C:\Program Files\Splunk\bin>splunk cmd python bmw_coldToFrozenExample.py
usage: python bmw_coldToFrozenExample.py <bucket_dir_to_archive>
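
That usage line is what the script itself prints when it is run without a bucket directory argument (see the argument check in the script below), so this output is expected for a bare invocation. A manual test would pass the path of one cold bucket as the single argument, for example (the bucket path here is only an illustration):

C:\Program Files\Splunk\bin>splunk cmd python bmw_coldToFrozenExample.py "C:\Program Files\Splunk\var\lib\splunk\windows_server_security\colddb\db_1530000000_1520000000_12"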


satyaallaparthi
Communicator

Hello ddrillic,

Below is the code that I am using for the coldToFrozenScript. Please help me with the problem I have.

# This is an example script for archiving cold buckets. It must be modified
# to suit your individual needs, and we highly recommend testing this on a
# non-production instance before deploying it.

import sys, os, gzip, shutil, subprocess, random, datetime

### CHANGE THIS TO YOUR ACTUAL ARCHIVE DIRECTORY!!!
ARCHIVE_DIR = '\\\\nv1001.net\\LOGS_PROD'



# For new style buckets (v4.2+), we can remove all files except for the rawdata.
# We can later rebuild all metadata and tsidx files with "splunk rebuild"
def handleNewBucket(base, files):
    print 'Archiving bucket: ' + base
    for f in files:
        full = os.path.join(base, f)
        if os.path.isfile(full):
            os.remove(full)

# For buckets created before 4.2, simply gzip the tsidx files
# To thaw these buckets, be sure to first unzip the tsidx files
def handleOldBucket(base, files):
    print 'Archiving old-style bucket: ' + base
    for f in files:
        full = os.path.join(base, f)
        if os.path.isfile(full) and (f.endswith('.tsidx') or f.endswith('.data')):
            fin = open(full, 'rb')
            fout = gzip.open(full + '.gz', 'wb')
            fout.writelines(fin)
            fout.close()
            fin.close()
            os.remove(full)

# This function is not called, but serves as an example of how to do
# the previous "flatfile" style export. This method is still not
# recommended as it is resource intensive
def handleOldFlatfileExport(base, files):
    command = ['exporttool', base, os.path.join(base, 'index.export'), 'meta::all']
    retcode = subprocess.call(command)
    if retcode != 0:
        sys.exit('exporttool failed with return code: ' + str(retcode))

    for f in files:
        full = os.path.join(base, f)
        if os.path.isfile(full):
            os.remove(full)
        elif os.path.isdir(full):
            shutil.rmtree(full)
        else:
            print 'Warning: found irregular bucket file: ' + full

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit('usage: python bmw_coldToFrozenExample.py <bucket_dir_to_archive>')

    if not os.path.isdir(ARCHIVE_DIR):
        try:
            os.mkdir(ARCHIVE_DIR)
        except OSError:
            # Ignore already exists errors, another concurrent invocation may have already created this dir
            sys.stderr.write("mkdir warning: Directory '" + ARCHIVE_DIR + "' already exists\n")

    bucket = sys.argv[1]
    if not os.path.isdir(bucket):
        sys.exit('Given bucket is not a valid directory: ' + bucket)

    rawdatadir = os.path.join(bucket, 'rawdata')
    if not os.path.isdir(rawdatadir):
        sys.exit('No rawdata directory, given bucket is likely invalid: ' + bucket)

    files = os.listdir(bucket)
    journal = os.path.join(rawdatadir, 'journal.gz')
    if os.path.isfile(journal):
        handleNewBucket(bucket, files)
    else:
        handleOldBucket(bucket, files)

    # Strip any trailing path separator (handle both '/' and Windows '\')
    # so the index name is derived correctly below
    bucket = bucket.rstrip('/\\')

    indexname = os.path.basename(os.path.dirname(os.path.dirname(bucket)))
    ##destdir = os.path.join(ARCHIVE_DIR, indexname, os.path.basename(bucket))
    destdir = os.path.join(ARCHIVE_DIR, indexname + datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))


    while os.path.isdir(destdir):
        print 'Warning: This bucket already exists in the archive directory'
        print 'Adding a random extension to this directory...'
        destdir += '.' + str(random.randrange(10))

    ##shutil.copytree(bucket, destdir)
    shutil.make_archive(destdir, 'zip', bucket)
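
For context (this is an inference from the config and the last line of the script, not something stated in the thread): Splunk invokes the configured coldToFrozenScript once for each bucket it rolls to frozen, passing that bucket's directory path as the final argument, and shutil.make_archive(destdir, 'zip', bucket) then packs the bucket into a zip on the share named roughly like this (timestamp is illustrative):

\\nv1001.net\LOGS_PROD\windows_server_security2019-03-01_10-00-00.zip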