Splunk Enterprise

How can I tell if Splunk is removing buckets?

splunkuser101
New Member

I am monitoring the disk usage of my Splunk data volume, as most do. The volume is 100 GB, and Splunk is configured to use up to 92 GB of it via the maxVolumeDataSizeMB setting for [volume:primary]. All my indexes use [volume:primary] as their base location (sketched below), so Splunk should start purging data when the combined size of all indexes hits that threshold. According to the documentation:

> When a volume containing warm buckets
> reaches its maxVolumeDataSizeMB, it
> starts rolling buckets to cold. When a
> volume containing cold buckets reaches
> its maxVolumeDataSizeMB, it starts
> rolling buckets to frozen. If a volume
> contains both warm and cold buckets
> (which will happen if an index's
> homePath and coldPath are both set to
> the same volume), the oldest bucket
> will be rolled to frozen.

(http://docs.splunk.com/Documentation/Splunk/latest/Indexer/Configureindexstoragesize)
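
For reference, the relevant part of my indexes.conf looks roughly like this (the paths, size, and index name below are illustrative, not my exact settings):

    # Volume capped at 92 GB (92 * 1024 MB)
    [volume:primary]
    path = /opt/splunk/var/lib/splunk
    maxVolumeDataSizeMB = 94208

    # Each index points its homePath and coldPath at the volume
    [main]
    homePath = volume:primary/defaultdb/db
    coldPath = volume:primary/defaultdb/colddb
    # thawedPath cannot reference a volume
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb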

My question: my current disk usage sits at 82 GB, which is 10 GB less than what I have configured Splunk to use. However, my monitoring shows a "sawtooth" trend, which suggests to me that Splunk is ingesting data (and therefore consuming space), hitting a threshold, and rolling older data from cold to frozen (which is effectively a delete), thus freeing space; the cycle then repeats, producing the "sawtooth" graph I am seeing.

However, the graph reflects the server's df -h output, and both show disk usage 10 GB below maxVolumeDataSizeMB. Is there a way I can test or confirm whether data is actually being purged?


martin_mueller
SplunkTrust

You could search the _internal index for component=BucketMover events.
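
For example, a search along these lines should show freezing activity over time (adjust the time range to cover a few of your sawtooth cycles):

    index=_internal sourcetype=splunkd component=BucketMover
    | timechart span=1h count

If buckets are being rolled to frozen (a delete, assuming no coldToFrozenDir or coldToFrozenScript is configured), those events should line up with the drops in your disk usage graph. You can also summarize bucket states and sizes directly with dbinspect:

    | dbinspect index=*
    | stats sum(sizeOnDiskMB) AS sizeMB count by state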
