
Why do I keep getting "free disk space reached" warnings with a smartstore IDX cluster?

Glasses
Builder

Hi - 

I have a 4-idx cluster with SmartStore. I keep seeing these warnings on all 4 idx members >>>

Search peer <splunk-idx-1> has the following message: The minimum free disk space (512000MB) reached for /opt/splunk/var/run/splunk/dispatch.  3/11/2021, 1:30:00 AM

No matter what I do to make extra room, I keep getting the warnings.

On each idx, server.conf is configured locally (/opt/splunk/etc/system/local) with:

[cachemanager]
eviction_policy = lru
#eviction_padding = 5120
eviction_padding = 10240 <<<< doubled
max_cache_size = 0
hotlist_recency_secs = 86400
hotlist_bloom_filter_recency_hours = 360
evict_on_stable = false

# disk usage processor settings
[diskUsage]
#minFreeSpace = 5000
minFreeSpace = 512000 <<<< 500GB
pollingFrequency = 100000
pollingTimerFrequency = 10

The warning troubles me because, if the cache manager is evicting properly, I should not be seeing it. Or am I mistaken?

I don't see a lot of misses in the MC under SmartStore Cache Performance; I see some repeated downloads, but no excessive downloads.

Should I set the max_cache_size instead of the minFreeSpace setting?

Per Splunk docs >>>

Set limits on disk usage
Note: This topic is not relevant to SmartStore indexes. See Initiate eviction based on occupancy of the cache's disk partition for information on how SmartStore controls local disk usage.

Per Splunk docs >>>

Disk full issues
A disk full related message indicates that the cache manager is unable to evict sufficient buckets. These are some possible causes:

Search load overwhelming local storage. For example, the entire cache might be consumed by buckets opened by at least one search process. When the search ends, this problem should go away.  

**** This is not the case, because the warnings persist even after search activity has ended.


Cache manager issues. If the problem persists beyond a search, the cause could be related to the cache manager. Examine splunkd.log on the indexer issuing the error.

***** I am seeing some "Cache was full and space could not be reserved" warnings, but I don't know how to fix them.
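
For reference, a search along these lines should count those warnings across the indexers (just a sketch; the component and log_level field names are my assumption based on the standard splunkd internal log extractions, so adjust if yours differ):

index=_internal sourcetype=splunkd component=CacheManager "Cache was full"
| stats count by host, log_level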

 

Any advice is greatly appreciated.

Thank you

1 Solution

Glasses
Builder

Apparently eviction_padding is the relevant value to increase, not minFreeSpace.

I increased eviction_padding = 512000 and restored minFreeSpace = 5000 (the default), and I no longer receive the warnings.
Can anyone confirm that this is the right solution?
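
For anyone who hits this later, here is roughly what the changed stanzas look like on each indexer (a sketch; the 512000 MB of padding matches my disk size, so scale it to your partition, and leave the rest of your [cachemanager] settings alone):

[cachemanager]
# evict sooner, well before the partition gets anywhere near the minFreeSpace floor
eviction_padding = 512000

[diskUsage]
# back to the default
minFreeSpace = 5000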


scelikok
SplunkTrust

Hi @Glasses,

You are right. Increasing minFreeSpace makes Splunk keep complaining whenever free disk space drops below 512000 MB.

However, 512 GB seems too high for eviction_padding. You may try a lower value, like 100 GB?
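
For example, something like this (a sketch, with 100 GB expressed in MB; pick a value that fits your cache partition):

[cachemanager]
# roughly 100 GB of headroom before eviction starts
eviction_padding = 102400

[diskUsage]
# default
minFreeSpace = 5000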

If this reply helps you, an upvote and "Accept as Solution" is appreciated.

Glasses
Builder

Thanks for confirming that minFreeSpace does not fix the disk full / disk space reached warnings. I am setting eviction_padding to 5% of my total disk capacity so that this eviction problem does not arise again. I could possibly lower it for more cache storage capacity, but for now it's not a problem.
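
For context, the rough arithmetic behind that 5% (512000 MB at 5% implies a cache partition of about 10,240,000 MB, i.e. roughly 10 TB):

10,240,000 MB x 0.05 = 512,000 MB  ->  eviction_padding = 512000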

