I'm looking to get a better understanding of when the cache manager will evict a bucket or at least the journal and tsidx files.
I'm using the default eviction policy (lru), so when the cache becomes full I assume the bucket(s) with the oldest last-accessed time are evicted first? How does the cache manager determine how many buckets need to be evicted? Does it look at the buckets that need to be downloaded into the cache and evict just enough space for those buckets to fit?
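For reference, these are the server.conf settings I believe control the sizing side of this (defaults are from my reading of the server.conf spec, so please correct me if any are wrong):

```ini
# server.conf on the indexer -- [cachemanager] stanza (SmartStore cache manager)
[cachemanager]
# Eviction scheme; lru is the default and is what I'm running with.
eviction_policy = lru
# Upper bound on the cache, in MB (0 = no limit). I assume eviction kicks in
# as the cache approaches this size minus eviction_padding.
max_cache_size = 0
# Headroom the cache manager tries to keep free, in MB (default 5120 as I read it).
eviction_padding = 5120
```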
I'm also interested in the conditions under which the cache manager considers a bucket "stale" (i.e. unlikely to participate in a future search). How long does a bucket have to go without being accessed before it is considered stale?
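My guess is that the "hotlist" recency settings below are what govern staleness, but I'm not sure exactly how they interact with the lru ordering (setting names are from the server.conf spec as I remember it, and the defaults may be off):

```ini
[cachemanager]
# Seconds a bucket's journal/tsidx files are protected from eviction after
# their last access -- default 86400 (24 hours), as I understand it.
hotlist_recency_secs = 86400
# Hours that a bucket's bloom filters stay protected beyond that -- I believe
# the default is 360 (15 days).
hotlist_bloom_filter_recency_hours = 360
```

Is a bucket only "stale" once it has aged past these windows, or is there some other heuristic on top of last-accessed time?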
The docs also state that one of the characteristics common to searches in most Splunk deployments is that:
"They have spatial and temporal locality. If a search finds an event at a specific time or in a specific log, it's likely that other searches will look for events within a closely similar time range or in that log."
Does this mean that if a bucket is needed for a search, other buckets whose earliest_time and latest_time fall within a certain range of that bucket's time span will be less likely to be evicted?