I am currently trying to get my head around a problem.
Data is being read in from an external source; I cannot control the size of the incoming data, but I only want to keep the most recent 20-30 MB of data received.
To do this I need to keep deleting or overwriting the oldest data.
At first I thought I could implement this quite simply by reducing the maximum size of the index, but that just resulted in all events being flushed when the index reached its limit, resetting it back to 0.
Is there a simple way to implement this in Splunk or does anyone have any experience of implementing this some other way?
You probably don't have enough buckets configured for your index. By default, Splunk stores data in 750 MB buckets on a 32-bit system. You can control the size of a bucket with the maxDataSize setting in indexes.conf, and the number of hot buckets with the maxHotBuckets setting in the same file. Data rolls from hot to warm to cold, and there are various other settings which can be used to manipulate retention.
You should probably read up on how indexed data is stored and purged within Splunk, but what you are asking about is completely possible.
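As a rough sketch (the index name and sizes here are hypothetical, and a numeric maxDataSize is specified in MB), a stanza along these lines in indexes.conf would cap the index at roughly 30 MB while rolling it in ~10 MB buckets, so that hitting the cap only freezes the oldest bucket (frozen buckets are deleted by default) rather than flushing everything:

    [rolling_feed]
    homePath   = $SPLUNK_DB/rolling_feed/db
    coldPath   = $SPLUNK_DB/rolling_feed/colddb
    thawedPath = $SPLUNK_DB/rolling_feed/thaweddb
    # Roll hot buckets at ~10 MB so the index is made up of several small buckets
    maxDataSize = 10
    maxHotBuckets = 1
    # Cap the whole index at ~30 MB; when exceeded, the oldest bucket is
    # frozen (deleted unless coldToFrozenDir/coldToFrozenScript is set)
    maxTotalDataSizeMB = 30

The key point is that maxDataSize should be well below maxTotalDataSizeMB, so only the oldest slice of data is dropped each time the cap is reached.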
Thanks for this. Understanding the buckets was definitely something I should've looked into first; it's functioning perfectly now. For reference to anyone who finds this: I had set the size of every bucket and the overall index size to the same value (around 10 MB, to test). That meant that each time Splunk tried to generate a new bucket there wasn't enough space within the index limit to create one, so it seemed to just flush all events to make space for more. All sorted now.
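For anyone testing a similar setup, one way to watch the buckets roll (index name hypothetical, matching the sketch above) is a search like:

    | dbinspect index=rolling_feed

which lists each bucket with its state (hot/warm/cold) and size on disk, so you can confirm that only the oldest bucket is being frozen rather than the whole index being flushed.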