
Log Retention

Path Finder

Hi,

I have configured the following parameters to test log archiving for one of my indexes, named "os", but it is not working as configured: data from the hot+warm buckets is rolled to cold, but data is not rolling from the cold buckets to the frozen archive location. Instead, the cold bucket count and size keep growing. Please let me know whether the parameters I have provided are wrong, and how I can verify that logs are being archived to the given location.

etc/apps/index.conf

[os]
homePath = $SPLUNK_DB2/os/db
coldPath = $SPLUNK_DB2/os/colddb
thawedPath = $SPLUNK_DB2/os/thaweddb
coldToFrozenDir = $SPLUNK_DB2_Frozen/Archive/os
maxWarmDBCount = 5
maxHotBuckets = 1
maxDataSize = 10
maxTotalDataSizeMB = 1000
frozenTimePeriodInSecs = 1800
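
For reference, my understanding of these settings: maxDataSize = 10 should roll a hot bucket once it reaches 10 MB, maxWarmDBCount = 5 should keep at most five warm buckets before the oldest rolls to cold, and frozenTimePeriodInSecs = 1800 should freeze (archive) any bucket whose newest event is older than 30 minutes.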

Storage buckets as per the above parameters:

8.0K    ./db/db_1329748885_1329748067_16/rawdata/.compressedAddresses
664K    ./db/db_1329748885_1329748067_16/rawdata
824K    ./db/db_1329748885_1329748067_16
8.0K    ./db/db_1329749807_1329749597_18/rawdata/.compressedAddresses
200K    ./db/db_1329749807_1329749597_18/rawdata
304K    ./db/db_1329749807_1329749597_18
8.0K    ./db/db_1329751849_1329749838_19/rawdata/.compressedAddresses
1.6M    ./db/db_1329751849_1329749838_19/rawdata
1.8M    ./db/db_1329751849_1329749838_19
8.0K    ./db/db_1329749567_1329748907_17/rawdata/.compressedAddresses
508K    ./db/db_1329749567_1329748907_17/rawdata
648K    ./db/db_1329749567_1329748907_17
8.0K    ./db/GlobalMetaData
8.0K    ./db/hot_v1_21/rawdata/.compressedAddresses
3.3M    ./db/hot_v1_21/rawdata
3.8M    ./db/hot_v1_21
8.0K    ./db/db_1329809830_1329751878_20/rawdata/.compressedAddresses
3.5M    ./db/db_1329809830_1329751878_20/rawdata
7.1M    ./db/db_1329809830_1329751878_20
15M ./db
8.0K    ./colddb/db_1323792035_1323791901_8/rawdata/.compressedAddresses
244K    ./colddb/db_1323792035_1323791901_8/rawdata
356K    ./colddb/db_1323792035_1323791901_8
8.0K    ./colddb/db_1328056408_1327030688_9/rawdata/.compressedAddresses
68M ./colddb/db_1328056408_1327030688_9/rawdata
135M    ./colddb/db_1328056408_1327030688_9
8.0K    ./colddb/db_1329748045_1329747137_15/rawdata/.compressedAddresses
780K    ./colddb/db_1329748045_1329747137_15/rawdata
948K    ./colddb/db_1329748045_1329747137_15
8.0K    ./colddb/db_1329747107_1329746717_14/rawdata/.compressedAddresses
380K    ./colddb/db_1329747107_1329746717_14/rawdata
508K    ./colddb/db_1329747107_1329746717_14
8.0K    ./colddb/db_1329746687_1329745996_13/rawdata/.compressedAddresses
636K    ./colddb/db_1329746687_1329745996_13/rawdata
788K    ./colddb/db_1329746687_1329745996_13
8.0K    ./colddb/db_1297617319_1291336726_0/rawdata/.compressedAddresses
179M    ./colddb/db_1297617319_1291336726_0/rawdata
399M    ./colddb/db_1297617319_1291336726_0
8.0K    ./colddb/db_1329737474_1328056434_10/rawdata/.compressedAddresses
107M    ./colddb/db_1329737474_1328056434_10/rawdata
213M    ./colddb/db_1329737474_1328056434_10
8.0K    ./colddb/db_1312010101_1308010497_3/rawdata/.compressedAddresses
178M    ./colddb/db_1312010101_1308010497_3/rawdata
402M    ./colddb/db_1312010101_1308010497_3
8.0K    ./colddb/db_1303821315_1297617349_1/rawdata/.compressedAddresses
117M    ./colddb/db_1303821315_1297617349_1/rawdata
363M    ./colddb/db_1303821315_1297617349_1
8.0K    ./colddb/db_1322968195_1319178356_6/rawdata/.compressedAddresses
190M    ./colddb/db_1322968195_1319178356_6/rawdata
395M    ./colddb/db_1322968195_1319178356_6
8.0K    ./colddb/db_1308010467_1303821327_2/rawdata/.compressedAddresses
176M    ./colddb/db_1308010467_1303821327_2/rawdata
397M    ./colddb/db_1308010467_1303821327_2
8.0K    ./colddb/db_1323791855_1322968211_7/rawdata/.compressedAddresses
44M ./colddb/db_1323791855_1322968211_7/rawdata
96M ./colddb/db_1323791855_1322968211_7
8.0K    ./colddb/db_1329745010_1329737477_11/rawdata/.compressedAddresses
5.8M    ./colddb/db_1329745010_1329737477_11/rawdata
6.4M    ./colddb/db_1329745010_1329737477_11
8.0K    ./colddb/db_1315568232_1312010124_4/rawdata/.compressedAddresses
197M    ./colddb/db_1315568232_1312010124_4/rawdata
399M    ./colddb/db_1315568232_1312010124_4
8.0K    ./colddb/db_1329745970_1329745037_12/rawdata/.compressedAddresses
844K    ./colddb/db_1329745970_1329745037_12/rawdata
1020K   ./colddb/db_1329745970_1329745037_12
8.0K    ./colddb/db_1319178335_1315568261_5/rawdata/.compressedAddresses
192M    ./colddb/db_1319178335_1315568261_5/rawdata
400M    ./colddb/db_1319178335_1315568261_5
3.2G    ./colddb
8.0K    ./thaweddb
3.2G    .
3.2G    total

Re: Log Retention

Motivator

I would first check the index status with this command:

| dbinspect index=os
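
If you want a quick per-state summary of your buckets, you could also pipe that into stats (assuming the dbinspect output in your Splunk version includes the usual state field):

| dbinspect index=os | stats count by state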

Then search for any errors, for example:

index=_internal "/Archive/os"

or

index=_internal source=*splunkd.log bucketmover OR freeze
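
If those return too much noise, narrowing to the bucket-moving component should isolate roll and freeze activity (assuming your splunkd.log events carry the standard component field):

index=_internal source=*splunkd.log component=BucketMover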

Re: Log Retention

Path Finder

The results of the query:

1 2/21/12 5:55:32.543 AM ashs95236.becpsn.com AsyncFreezer freeze succeeded for /DATA2/splunk/var/lib/splunk/os/colddb/db_1322968195_1319178356_6
25 2/21/12 5:53:56.453 AM ashs95236.becpsn.com will attempt to freeze: /DATA2/splunk/var/lib/splunk/os/db/db_1329809830_1329751878_20 because frozenTimePeriodInSecs=1800 exceeds difference between now=1329821636 and latest=1329809830


Re: Log Retention

Champion

Looking at your config, have you checked that it is your settings being applied and not some defaults?

If you run the following from the SPLUNK_HOME/bin directory:

./splunk cmd btool indexes list

it will list all the settings that Splunk has applied from the indexes configs it has read in.
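
To focus on just your index and see exactly which file each value comes from, you can also add the stanza name and the --debug flag:

./splunk cmd btool indexes list os --debug

The --debug flag prefixes every line with the configuration file it was read from, so you can spot exactly where each setting is coming from.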

Also, when you say your config is called index.conf in the app directory, do you mean it is called indexes.conf in a local directory?

The correct name is indexes.conf and it needs to be located in one of a few places:

SPLUNK_HOME/etc/system/local/ <-- to make it a system-wide index

SPLUNK_HOME/etc/apps/APPNAME/local/ <-- to use it as more of a specific app-related index
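
For example, a minimal sketch of the stanza in the app-local location, reusing the settings from your post (APPNAME is a placeholder for your app directory):

# SPLUNK_HOME/etc/apps/APPNAME/local/indexes.conf
[os]
coldToFrozenDir = $SPLUNK_DB2_Frozen/Archive/os
frozenTimePeriodInSecs = 1800

Splunk merges this with every other indexes.conf it reads, so the btool output above remains the authoritative view of what actually applies.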


Re: Log Retention

Path Finder

I have used the btool command and it is very useful for bucket rollout verification. Thank you for the information.
