In a 4-indexer cluster with 60 individual indexes, I now have 40,000+ buckets (the data goes back years). I assume this could cause some performance issues. Can you confirm this?
If so, is there a way to optimize the bucket count, e.g., by merging buckets within the same index so that 10 buckets become 1?
Using Splunk 6.1.4.
1000 buckets per index doesn't sound horrible to me. Are you actually seeing performance issues or are you just worried about the numbers?
This should not cause performance problems. If you had tens of thousands of buckets in a single directory, some older filesystems (e.g., ext3) could start hitting performance limits, but right now you're at an average of only 40,000 / (4 × 60) ≈ 167 buckets per index per indexer, and even fewer per directory once they are spread across hot/warm and cold paths. Newer filesystems (e.g., ext4, XFS, NTFS) also avoid this issue.
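If you want to verify the actual distribution yourself, a search along these lines should show bucket counts per indexer, index, and bucket state; the dbinspect field names (splunk_server, state) are my assumption based on typical output and may differ slightly between versions:

    | dbinspect index=*
    | stats count AS buckets BY splunk_server, index, state
    | sort - buckets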
You can't easily merge buckets, but going forward you should make sure your indexes have maxDataSize (the maximum bucket size) set to auto_high_volume (10 GB) rather than auto (750 MB), so that buckets are not unnecessarily small.
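For reference, a minimal indexes.conf sketch of that setting; the index name and paths below are placeholders, not your actual configuration:

    [my_big_index]
    homePath   = $SPLUNK_DB/my_big_index/db
    coldPath   = $SPLUNK_DB/my_big_index/colddb
    thawedPath = $SPLUNK_DB/my_big_index/thaweddb
    # auto_high_volume lets hot buckets grow to roughly 10 GB before rolling,
    # which keeps the overall bucket count down on high-volume indexes
    maxDataSize = auto_high_volume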
Thanks to both of you. I'm not currently seeing any performance impact; I'm just trying to be proactive here. maxDataSize is indeed set to auto_high_volume, but thanks for the tip anyway!