I am trying to control disk space for my test of Splunk. It's not a minFreeSpace issue, because I want Splunk to continue to index.
maxTotalDataSizeMB = 500 (for main), but the index continues to grow. It is now at 691 MB (see below).
Even though the "current size" reports 691 MB, has it deleted (frozen) 191 MB of the oldest data, FIFO-style?
In other words, will it free up 191 MB of disk? How can the current size be bigger than the max?
Index: main
Max size (MB) of entire index: 500
Current size (in MB): 691

[main]
maxTotalDataSizeMB = 500
Hmm, what are your individual bucket sizes set to? By default they are quite large (if I recall correctly, 750 MB on 32-bit systems and 10 GB on 64-bit). Buckets won't roll to frozen until all the data in them is old enough to be rolled, and if you have multiple 10 GB buckets it will take a lot more than 0.5 GB of data to roll them.
There is a good wiki article on buckets: http://www.splunk.com/wiki/Deploy:UnderstandingBuckets
You might want to try cutting down the number of buckets and/or the bucket sizes for your index. The default config for index main is:

[main]
homePath = $SPLUNK_DB\defaultdb\db
coldPath = $SPLUNK_DB\defaultdb\colddb
thawedPath = $SPLUNK_DB\defaultdb\thaweddb
maxMemMB = 20
maxConcurrentOptimizes = 6
maxHotIdleSecs = 86400
maxHotBuckets = 10
maxDataSize = auto_high_volume
According to the docs, maxHotBuckets governs the number of hot buckets (in this case 10) and maxDataSize the individual bucket size (in this case auto_high_volume = 10 GB). So to keep this index at 500 MB while keeping 10 buckets, I would suggest setting maxDataSize to 500/10 = 50 MB. Better yet, split it into five 100 MB buckets so that you aren't constantly rolling.
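As a sketch, an indexes.conf stanza for that "five 100 MB buckets" suggestion might look like the following (the values are illustrative for a 500 MB cap, not tuned for your data volume; maxDataSize takes a size in MB when given a number):

```
[main]
homePath = $SPLUNK_DB\defaultdb\db
coldPath = $SPLUNK_DB\defaultdb\colddb
thawedPath = $SPLUNK_DB\defaultdb\thaweddb
maxTotalDataSizeMB = 500
maxHotBuckets = 5
maxDataSize = 100
```

You would need to restart Splunk after editing indexes.conf for the change to take effect.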
Also, raising maxHotBuckets will not normally raise the number of buckets in use. Under normal circumstances Splunk uses one, maybe two, hot buckets. A new hot bucket is only created when an event arrives with a timestamp far away from the timestamps of the data in the already-open hot buckets.
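To see how many buckets your index actually has and how large they are, a search along these lines may help (treat it as a sketch; the exact field names reported by dbinspect, such as sizeOnDiskMB, can vary between Splunk versions):

```
| dbinspect index=main
| stats count, sum(sizeOnDiskMB) AS totalMB by state
```

That should break the index down by bucket state (hot/warm/cold), which makes it easier to see why the total is exceeding maxTotalDataSizeMB.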