Hi Splunkers,
I’m seeing a “Percentage of small buckets is high” health warning on one of my indexers.
The alert shows:
60% small buckets in the last hour
7 out of 9 buckets were small
Affected index: _internal (and possibly others)
I’ve gone through several community posts, but the guidance and reasons vary across threads. I want to understand this issue clearly for my case and confirm the correct next steps.
The possible causes I've gathered from those threads so far:
1. Indexer restart: a restart can force Splunk to roll hot buckets early and create small buckets.
2. Timestamp issues or mixed-time data: bad timestamps, or ingesting old and new data at the same time, can cause early rolling.
3. Modified index settings: maxDataSize, maxHotBuckets, or other defaults overridden in indexes.conf.
4. Overwritten props.conf: especially if /system/default or local props affecting _internal were modified.
5. Sourcetypes landing in the _internal index, which I checked with:
index=_internal | stats count by sourcetype
6. Reingesting old data, which might trigger early bucket rolls (I've been looking at bucket sizes and time spans with the dbinspect search after this list).
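For reference, here is how I've been checking bucket sizes and time spans on the affected indexer. This is only a sketch built on the dbinspect command; I'm assuming the field names (sizeOnDiskMB, eventCount, startEpoch, endEpoch, state) match what dbinspect returns on my version:

| dbinspect index=_internal
| search state=warm OR state=cold
| eval spanHours=round((endEpoch-startEpoch)/3600, 1)
| table bucketId state sizeOnDiskMB eventCount spanHours modTime
| sort sizeOnDiskMB

Sorting by sizeOnDiskMB puts the smallest buckets first; if those buckets also cover only a few minutes of event time, that points at early rolling rather than genuinely sparse data.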
My questions:
1. Is this mostly a sign of server restart / rolling behavior, or should I suspect timestamp issues even for _internal?
2. Does a simple restart sometimes clear the health status?
3. For _internal, what are the default index settings that should be verified?
4. What is the best way to confirm if props.conf was accidentally overridden for internal logs?
5. What are the most reliable steps to diagnose repeated small bucket creation on an indexer?
I also reviewed some older community posts where this behavior was mentioned as a possible Splunk bug (from around 2015).
I’m looking for a clear and consolidated explanation and recommended steps specific to a case where _internal is involved and the small bucket percentage is abnormally high.
Timestamp issues are uncommon in _internal.
Splunkd.log will show indexer restarts.
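For example, a search like this should surface recent restarts; it is just a sketch, since the exact startup message text ("Splunkd starting") can vary between versions:

index=_internal sourcetype=splunkd "Splunkd starting" earliest=-7d
| table _time host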
Restarting does not clear the health status; it usually clears on its own after about 24 hours.
Use btool to view the default index settings.
$SPLUNK_HOME/bin/splunk btool indexes list _internal --debug | grep 'system/default'
The --debug flag makes btool print which file each setting comes from, so dropping (or inverting) the grep will also show any settings that override the defaults. The same approach works for props.conf if you want to confirm nothing affecting the internal logs was accidentally overridden.
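To dig into why buckets are rolling in the first place, the bucket roller's own log events are useful. This is a sketch that assumes the HotBucketRoller component name and the idx=/caller= key-value pairs that splunkd.log writes in recent versions:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| stats count by idx, caller
| sort - count

The caller value records why each hot bucket was rolled (for example size limits, idle time, or the hot bucket count), which should narrow down the cause of the repeated small buckets.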