Why is /opt/splunk/var/run/splunk/cluster/search-buckets filling up my disk?
Splunk 6.6.3, clustered environment. One of our indexers is reporting high disk usage. I traced it down to /opt/splunk/var/run/splunk/cluster/search-buckets, which contains many search_sitedefault_gen*.csv.gz and summarize_sitedefault_gen*.csv.gz files going back 22 days (to December 12 at the time of writing). I deleted the older ones to stop triggering our disk usage alerts (see the sketch below for how I tallied them).
What's creating these files, and why?
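For reference, this is roughly how one could tally them, a minimal Python sketch assuming the path and filename patterns above (purely illustrative, not a Splunk tool):

```python
# Minimal sketch: total up the generation files per prefix/site so you can
# confirm what is eating the disk. Path and glob pattern are assumptions
# based on the filenames above.
from collections import Counter
from pathlib import Path

BUCKET_DIR = Path("/opt/splunk/var/run/splunk/cluster/search-buckets")

counts, sizes = Counter(), Counter()
for f in BUCKET_DIR.glob("*_gen*.csv.gz"):
    prefix = f.name.split("_gen")[0]        # e.g. "search_sitedefault"
    counts[prefix] += 1
    sizes[prefix] += f.stat().st_size

for prefix in sorted(counts):
    print(f"{prefix}: {counts[prefix]} files, {sizes[prefix] / 1e6:.1f} MB")
```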
What is the purpose of these files?
And do you know whether there is a cleanup cycle or a setting to delete them automatically?

This was a combination of two bugs that were fixed in later versions of Splunk (7.0.8+, 7.1.6+, 7.2.4+).
As a workaround, it is safe to:
- delete the older generation files, keeping the last 10 or so per site
- never delete the gen0 file
For example, if search_sitedefault_gen1000.csv.gz is the latest file, I can safely delete search_sitedefault_gen(1-990).csv.gz.
But remember this is per site, so if the latest files are:
- search_site0_gen1000.csv.gz (delete gen1-gen990 for site0, but not gen0)
- search_site1_gen3500.csv.gz (delete gen1-gen3490 for site1, but not gen0)
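For illustration, here is a minimal Python sketch of that cleanup. It assumes the path above and a keep-the-newest-10-generations-per-site policy, and it runs in dry-run mode by default; treat it as a sketch of the workaround, not a supported tool.

```python
# Minimal sketch of the workaround: per prefix (search/summarize) and per site,
# keep the newest KEEP_LAST generation files, never delete gen0, remove the rest.
# DRY_RUN is on by default -- review the output before deleting anything for real.
import re
from collections import defaultdict
from pathlib import Path

BUCKET_DIR = Path("/opt/splunk/var/run/splunk/cluster/search-buckets")
KEEP_LAST = 10
DRY_RUN = True

# Filenames look like search_sitedefault_gen1000.csv.gz or summarize_site1_gen3500.csv.gz
pattern = re.compile(r"^(search|summarize)_(site\w+)_gen(\d+)\.csv\.gz$")

groups = defaultdict(list)
for f in BUCKET_DIR.iterdir():
    m = pattern.match(f.name)
    if m:
        groups[(m.group(1), m.group(2))].append((int(m.group(3)), f))

for (prefix, site), files in groups.items():
    files.sort()                              # ascending by generation number
    for gen, path in files[:-KEEP_LAST]:      # everything but the newest KEEP_LAST
        if gen == 0:
            continue                          # never delete the gen0 file
        print(f"{'would delete' if DRY_RUN else 'deleting'} {path}")
        if not DRY_RUN:
            path.unlink()
```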

Is anyone else facing the same issue on 8.2.4? I will check with support and see.
Hi dxu,
Is there a workaround for this?
Thanks,
Santhosh
FWIW, and I know it's not ideal, but a rolling restart of the cluster peers will clear these down. I'm on 7.1.5.
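If you want to script that restart, here is a minimal sketch, assuming Splunk is installed under /opt/splunk and this is run on the cluster master (which is where the rolling restart is initiated):

```python
# Minimal sketch: trigger a rolling restart of the indexer cluster peers.
# Assumption: Splunk lives at /opt/splunk and this runs on the cluster master.
import subprocess

subprocess.run(
    ["/opt/splunk/bin/splunk", "rolling-restart", "cluster-peers"],
    check=True,  # raise CalledProcessError if the CLI exits non-zero
)
```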
Thank you stepheneardley.

You have a lot of traffic for your deployment. Increase disk space.

This is NOT a helpful answer, and it does not explain why there are so many of these files in this directory. There is apparently no documentation from Splunk on this. I am opening a case, and I suggest everyone else seeing this do the same.
We recently had a similar issue with a different path; see "Why does /opt/splunk/var/run/searchpeers fill up?"
@ddrillic, thanks for responding, but that isn't related. I need to know what is creating the files listed above in /opt/splunk/var/run/splunk/cluster/search-buckets. I just had to delete files from all of my indexers to free up space. We never had to do this before our upgrade to 6.6.3.
Hi, can anyone provide input as to what is creating the search_sitedefault_gen*.csv.gz and summarize_sitedefault_gen*.csv.gz files in /opt/splunk/var/run/splunk/cluster/search-buckets?
Thanks

Same issue here on v6.6.5. Did you ever find anything out?
We're now seeing additional indexers with disk usage issues from the above. Can anyone shed any light?
