Splunk Enterprise

Indexes and Circular Buffers

Drainy
Champion

Hi,

I am currently trying to get my head around a problem.
Data is being read in from an external source. I can't control the size of the incoming data, but I only want to keep the most recent 20-30 MB received.
To do this I need to keep deleting or overwriting the oldest data.

At first I thought I could implement this quite simply by reducing the size of the index, but that just resulted in all events being flushed when the index reached its limit, resetting it back to 0.
Is there a simple way to implement this in Splunk, or does anyone have experience of implementing it some other way?

1 Solution

jbsplunk
Splunk Employee

You probably don't have enough buckets configured for your index. By default (maxDataSize = auto), Splunk stores data in buckets of up to 750MB. You can control the size of a bucket with the maxDataSize setting in indexes.conf, and the number of hot buckets with the maxHotBuckets setting in the same file. Data rolls from hot to warm to cold, and there are various other settings you can use to manage retention.

You should probably read up on how indexed data is stored and purged within Splunk, but what you're asking about here is entirely possible.

http://www.splunk.com/base/Documentation/latest/admin/HowSplunkstoresindexes
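To make this concrete, the settings mentioned above can be combined into a single index stanza. This is only an illustrative sketch: the index name "external_feed" and the specific sizes are assumptions for the 20-30 MB scenario in the question, not taken from the thread.

```ini
# indexes.conf -- illustrative stanza, not a drop-in config
[external_feed]
homePath   = $SPLUNK_DB/external_feed/db
coldPath   = $SPLUNK_DB/external_feed/colddb
thawedPath = $SPLUNK_DB/external_feed/thaweddb

# Keep each bucket well below the total index cap so several buckets
# can coexist and only the oldest needs to roll off.
maxDataSize        = 10   # max size of a hot bucket, in MB
maxHotBuckets      = 3    # number of hot buckets kept open

# Once the index exceeds this total, the oldest buckets are frozen
# (deleted by default), giving the circular-buffer behaviour.
maxTotalDataSizeMB = 30
```

The key design point is that maxTotalDataSizeMB must be a multiple of the bucket size; if a single bucket can fill the whole index, Splunk has no older bucket to age out and has to discard events instead.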



Drainy
Champion

Thanks for this. Understanding the buckets was definitely something I should have looked into first. It's functioning perfectly now. For reference, for anyone who finds this: I had set the size of every bucket (and the overall index size) to the same level, around 10 MB for testing. That meant that each time Splunk tried to create a new bucket there wasn't enough space for one, so it appeared to flush all events to make room. All sorted now.
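The misconfiguration described above can be sketched as two contrasting stanzas (stanza names and sizes are illustrative, not from the thread):

```ini
# Broken: bucket size equals the total index cap, so a second bucket
# never fits and Splunk frees space by discarding events.
[feed_broken]
maxDataSize        = 10
maxTotalDataSizeMB = 10

# Working: the cap holds several buckets, so only the oldest bucket
# rolls off when the limit is reached.
[feed_working]
maxDataSize        = 10
maxTotalDataSizeMB = 30
```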
