Getting Data In

Total space for index

skippylou
Communicator

I see a lot in the docs and elsewhere showing how to set limits on buckets, but I can't find whether there is a way to limit the size of an index and have old data deleted when room is needed for new data - a FIFO approach.

Basically, I want to ensure that incoming logs aren't blocked while I clear space or archive.

Thoughts?

Thanks,

Scott

1 Solution

ftk
Motivator

You can set maxTotalDataSizeMB on a per-index basis in indexes.conf.

maxTotalDataSizeMB =

  • The maximum size of an index (in MB).
  • If an index grows larger, the oldest data is frozen.
  • Defaults to 500000.

Once the data is rolled to frozen, it is deleted by default: http://www.splunk.com/base/Documentation/latest/Admin/HowSplunkstoresindexes

After changing indexes.conf, you will have to restart your Splunk instance.
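
A minimal sketch of what that could look like in indexes.conf (my_index and the 100 GB cap are only placeholder examples, not recommendations):

  [my_index]
  homePath   = $SPLUNK_DB/my_index/db
  coldPath   = $SPLUNK_DB/my_index/colddb
  thawedPath = $SPLUNK_DB/my_index/thaweddb
  # Cap the index at roughly 100 GB; once it grows past this, the oldest
  # buckets roll to frozen and (with no frozen archive configured) are deleted.
  maxTotalDataSizeMB = 100000
  # Optional: archive instead of delete by pointing coldToFrozenDir at an
  # archive location. Left commented out here, so frozen buckets are deleted.
  # coldToFrozenDir = /path/to/archive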

Lowell
Super Champion

Be sure to check out this resource as well: http://www.splunk.com/wiki/Deploy:UnderstandingBuckets

Lowell
Super Champion

You are correct, a whole bucket is frozen (archived/deleted) at once. The 10 GB default is for 64-bit systems; it's 700 MB for 32-bit systems. So I think it's safe to say that anything in between should be fine. The issue is less about the size of your buckets than about how many buckets you end up with at that size. A hundred or two shouldn't be a problem, but 10,000 buckets will be. Buckets that span a smaller time range can also improve performance if your searches generally cover short time ranges... so, yes, it's complicated.
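
As a rough sketch, lowering the bucket size would be a one-line change in indexes.conf (the 1 GB value here is just an example, not a recommendation):

  [my_index]
  # Roll hot buckets to warm at roughly 1 GB instead of the 10 GB
  # (auto_high_volume) default, so space is reclaimed in smaller chunks
  # when buckets freeze - at the cost of more buckets overall.
  maxDataSize = 1000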

skippylou
Communicator

Thanks, after re-reading that it makes more sense now. Just to clarify: when it deletes, it seems it has to delete a whole bucket - which defaults to 10 GB based on maxDataSize. Has anyone seen a performance penalty from dropping that lower?
