I am using the Splunk archiving feature where events are archived to HDFS after a certain amount of time (23 days in my case) and then removed from the indexer after 26 days.
This is all working, but I recently started archiving a new index that seems to have many more buckets, and thus needs many more inodes in HDFS (Splunk uses up to 7 inodes per bucket).
I hesitate to adjust the bucketing I use for normal Splunk indexing, since all is working well there, but wondered whether there are any settings I should look at to reduce the number of buckets this index creates.
On the Splunk side, you have several indirect controls over bucket count in indexes.conf, among them:

- maxDataSize: the size at which a hot bucket rolls (auto is roughly 750MB; auto_high_volume is roughly 10GB on 64-bit systems)
- maxHotBuckets: how many hot buckets the index may keep open at once
- maxHotSpanSecs: the maximum time span a hot bucket may cover before rolling
- maxHotIdleSecs: how long a hot bucket may sit idle before rolling
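For the size knob specifically, a minimal indexes.conf sketch might look like this (the index name hdfs_archive is made up for illustration):

```
# indexes.conf -- illustrative sketch; "hdfs_archive" is a hypothetical index name
[hdfs_archive]
# auto_high_volume rolls hot buckets at roughly 10GB on 64-bit systems,
# versus roughly 750MB for the "auto" default, so a high-volume index
# creates far fewer buckets (and needs far fewer HDFS inodes) per day.
maxDataSize = auto_high_volume
```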
Which of these is your magic bullet depends on why your buckets roll: if they roll due to size alone, changing the time-range-related settings won't help.
Making up some numbers, say you put 100GB/day into the index, taking up maybe 50GB/day of space on disk. With the default maxDataSize = auto (roughly 750MB buckets) you'd get at least 67 buckets per day, while auto_high_volume (roughly 10GB buckets) would cut this to about five buckets per day.
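To tie that back to the inode concern: at up to 7 inodes per bucket, 67 buckets/day works out to roughly 469 inodes/day in HDFS, while 5 buckets/day needs only about 35, an order-of-magnitude saving under the same made-up volume figures.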