Deployment Architecture

High Percentage of Small Buckets on Indexer – Mostly on _internal Index

sanjai
Path Finder

Hi Splunkers,

I’m seeing a “Percentage of small buckets is high” health warning on one of my indexers.
The alert shows:

  • 60% small buckets in the last hour

  • 7 out of 9 buckets were small

  • Affected index: _internal (and possibly others)

I’ve gone through several community posts, but the guidance and reasons vary across threads. I want to understand this issue clearly for my case and confirm the correct next steps.

My Findings So Far (Possible Causes)

  1. Indexer restart
    A restart can force Splunk to roll hot buckets early and create small buckets.

  2. Timestamp issues or mixed-time data
    Bad timestamps or ingesting old + new data at the same time can cause early rolling.

  3. Index settings modified
    Settings such as maxDataSize or maxHotBuckets may have been overridden from their defaults.

  4. Props.conf overwritten
    Especially if /system/default or local props affecting _internal were modified.

  5. Sourcetypes in the _internal index
    Checked with:
    index=_internal | stats count by sourcetype

  6. Reingesting old data
    Might trigger early bucket rolls.

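To ground these findings in data, a dbinspect search can list the actual size and event count of every bucket in the index. This is a sketch (field names follow the dbinspect documentation; adjust the index and sort to taste):

```
| dbinspect index=_internal
| eval sizeMB = round(sizeOnDiskMB, 2)
| table bucketId state startEpoch endEpoch eventCount sizeMB
| sort sizeMB
```

Buckets with tiny sizeMB and short startEpoch-to-endEpoch spans are the ones the health check is counting.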

What I Want to Confirm

  1. Is this mostly a sign of server restart / rolling behavior, or should I suspect timestamp issues even for _internal?

  2. Does a simple restart sometimes clear the health status?

  3. For _internal, what are the default index settings that should be verified?

  4. What is the best way to confirm if props.conf was accidentally overridden for internal logs?

  5. What are the most reliable steps to diagnose repeated small bucket creation on an indexer?
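On the last question, one reliable approach is to ask splunkd itself why each hot bucket rolled: the HotBucketRoller component logs a reason (caller) with every roll. A sketch — idx and caller are key=value pairs in the log line, so they should auto-extract, but verify against your own events:

```
index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| stats count by idx caller
```

A high count for a caller like size-based vs. time-based rolling points at the root cause directly.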

I also reviewed some older community posts where this behavior was mentioned as a possible Splunk bug (from around 2015).

I’m looking for a clear and consolidated explanation and recommended steps specific to a case where _internal is involved and the small bucket percentage is abnormally high.
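On the props.conf question specifically, btool with --debug prints every effective setting alongside the .conf file it came from, so an accidental override of the internal sourcetypes stands out. A sketch, assuming the splunkd sourcetype is the one of interest:

```
# Show effective props for the splunkd sourcetype, with source file per line
$SPLUNK_HOME/bin/splunk btool props list splunkd --debug

# Anything not sourced from system/default is an override worth reviewing
$SPLUNK_HOME/bin/splunk btool props list splunkd --debug | grep -v "system/default"
```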


richgalloway
SplunkTrust

Timestamp issues are uncommon in _internal.

Splunkd.log will show indexer restarts.
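To correlate the warning with restarts, a search like this can chart startup events (hedged: the exact startup message text can vary across versions, so confirm it against your splunkd.log):

```
index=_internal sourcetype=splunkd "Splunkd starting"
| timechart span=1h count
```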

Restarting does not clear the health status; the indicator usually clears on its own within about 24 hours as the measurement window rolls over.

Use btool to view the default index settings.

$SPLUNK_HOME/bin/splunk btool indexes list _internal --debug | grep "system/default"

btool can also show overrides of default settings.
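For example, with --debug every line is prefixed by the contributing .conf file, so anything sourced outside system/default is an override:

```
$SPLUNK_HOME/bin/splunk btool indexes list _internal --debug | grep -v "system/default"
```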


---
If this reply helps you, Karma would be appreciated.