My customers are getting the error below for their searches:
[splunk-idx-1] Streamed search execute failed because: Error in 'S2BucketCache': _openImpl(): received HTTP status code=503 description="Service Unavailable" message="Can not open bucket cacheId="bid|myindex_6_6m~11~D8865C-F354-406A-A114-17D2B309DD88|": filesystem on which path=/opt/splunk/indexes/myindex_6_6m/db/db_1565274656_1565274618_11_D8865C-F354-406A-A114-17D2B309DD88 resides has reached minFreeMB=2000; retry when sufficient space is available" 2 times consecutively for cid="bid|myindex_6_6m~11~D8865C-F354-406A-A114-17D2B309DD88|".
Could you please suggest what we should do?
The error actually means "Cache is full and cache space could not be reserved". The log message is misleading: the error sent to the search incorrectly states that the 2GB minFreeSpace limit was hit, when in fact the cache manager was simply unable to reserve cache space before downloading the bucket. As of version 7.3.x the message reads "Cache was full and space could not be reserved".
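To illustrate why the message is misleading, here is a minimal Python sketch of the two distinct conditions. This is not Splunk's implementation; the function names and numbers are illustrative assumptions. The message blames the filesystem minFreeMB check, but the condition that actually failed is the cache-space reservation.

```python
# Hypothetical sketch only -- not Splunk source code; names/values are illustrative.

def min_free_check(free_disk_mb: int, min_free_mb: int = 2000) -> bool:
    """The condition the error message *claims* was violated:
    the filesystem dropped below minFreeMB (2000 MB by default)."""
    return free_disk_mb >= min_free_mb

def can_reserve_cache_space(cache_used_mb: int, max_cache_size_mb: int,
                            eviction_padding_mb: int, needed_mb: int) -> bool:
    """The condition that *actually* failed: the cache manager could not
    reserve needed_mb of cache space without exceeding max_cache_size
    minus the eviction padding, and nothing could be evicted."""
    return cache_used_mb + needed_mb + eviction_padding_mb <= max_cache_size_mb

# Example: the disk itself is fine (well above 2000 MB free), but the
# cache is nearly full, so the reservation fails even though the
# minFreeMB limit was never actually hit.
print(min_free_check(free_disk_mb=50000))           # disk check passes: True
print(can_reserve_cache_space(cache_used_mb=98000,
                              max_cache_size_mb=100000,
                              eviction_padding_mb=5120,
                              needed_mb=800))        # reservation fails: False
```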
When the cache manager is not able to evict buckets, check the following:
1) Is the disk full?
2) Is max_cache_size set too low?
3) Is eviction_padding set too high?
4) Are buckets being uploaded to remote storage? Buckets that have not been uploaded cannot be evicted.
5) Is the disk full of hot buckets? Hot buckets cannot be evicted; try a restart to roll them to warm.
6) Is the cache large enough to support the volume of data being searched?
7) Is the data balanced, and are the primaries balanced (assuming an indexer cluster)?
If none of these helps, please open a support ticket with the details as well as diags.
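For items 2) and 3): max_cache_size and eviction_padding live in the [cachemanager] stanza of server.conf on the indexers. The values below are examples only, not recommendations; size them for your environment.

```ini
# server.conf on the indexer -- example values, not recommendations
[cachemanager]
# Upper bound on the cache size, in MB (0 means no limit beyond disk space)
max_cache_size = 250000
# Free-space headroom, in MB, that eviction tries to maintain
eviction_padding = 5120
```

You can check the effective values with `splunk btool server list cachemanager --debug`, which also shows which configuration file each setting comes from.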