I am using the Splunk archiving feature where events are archived to HDFS after a certain amount of time (23 days in my case) and then removed from the indexer after 26 days.
This is all working, but I recently started archiving a new index that seems to have many more buckets, and thus we need many more inodes in HDFS (Splunk uses up to 7 inodes per bucket).
I hesitate to adjust the bucketing that I have for normal Splunk indexing as all is working well there, but wondered if there were any settings I should look at to reduce the number of buckets that this index has.
On the Splunk side, you have several indirect controls over bucket count in indexes.conf. Some of them:
maxDataSize limits the bucket size. If your buckets roll because of this and you're at the default auto (roughly 750 MB buckets), consider auto_high_volume (roughly 10 GB buckets on 64-bit systems)
maxHotSpanSecs limits the time range a bucket can span. If you frequently add data that's not timestamped around "now", e.g. backfills, you may see plenty of small buckets each spanning up to the default 90 days. Increasing this setting would reduce the number of buckets, but may not be ideal in every scenario
maxHotBuckets limits the number of hot buckets kept open at any time. If you have three buckets open with current time ranges and add one old event, a current hot bucket will get rolled and a new bucket will be created for that one old event. A while later that old bucket may already have been rolled, and when you add another similarly old event... you get a new tiny hot bucket again. Increasing this setting might reduce the number of small old buckets in such a scenario
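For reference, all three settings live in the index's stanza in indexes.conf. A sketch (the index name and the values are placeholders to illustrate the knobs, not recommendations):

```ini
[my_archived_index]
# Roll hot buckets at ~10 GB instead of the ~750 MB "auto" default
maxDataSize = auto_high_volume
# Maximum time span per hot bucket, in seconds (7776000 = 90 days, the default)
maxHotSpanSecs = 7776000
# Keep more hot buckets open so out-of-order timestamps don't force tiny buckets
maxHotBuckets = 10
```

Changes take effect for newly created buckets after a restart or reload; existing buckets keep their size and span.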
Which of these is your magic bullet depends on why your buckets roll. If they roll due to size alone, changing the time-related settings won't help.
Making up some numbers, say you put 100GB/day into the index, for maybe 50GB/day of space on disk after compression. With the default auto setting (750 MB buckets) you'd get at least 67 buckets per day, while auto_high_volume (10 GB buckets) would cut that to about five buckets per day.
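The arithmetic above can be sketched quickly (the 50 GB/day on-disk figure and the 750 MB / 10 GB bucket sizes are the same illustrative assumptions as above, using decimal GB):

```python
import math

def buckets_per_day(disk_gb_per_day, bucket_size_mb):
    """Minimum buckets rolled per day if size is the only roll trigger."""
    return math.ceil(disk_gb_per_day * 1000 / bucket_size_mb)

print(buckets_per_day(50, 750))    # auto (~750 MB buckets) -> 67
print(buckets_per_day(50, 10000))  # auto_high_volume (~10 GB buckets) -> 5
```

This is a floor, not a prediction: time-based rolls (maxHotSpanSecs, maxHotBuckets pressure) can only add buckets on top of it.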