Getting Data In

BucketMover Errors

rmorlen
Splunk Employee

We are seeing about 100,000 events per hour with:

0600 ERROR BucketMover - aborting move because recursive copy from src='/splunkidx/defaultdb/db/db_1384235999_1384149600_72640' to dst='/splunkidx/defaultdb/colddb/inflight-db_1384235999_1384149600_72640' failed (reason='Too many links').

I realize the "Too many links" message is a Linux issue. I'm not sure why it is happening, but more importantly, how do I stop the error messages?
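For context (a sketch, not part of the original post): Splunk bucket directories are named db_&lt;newest_epoch&gt;_&lt;oldest_epoch&gt;_&lt;localid&gt;, so the failing bucket's time range can be decoded with a few lines like these:

```python
# Decode a Splunk bucket directory name: db_<newest_epoch>_<oldest_epoch>_<localid>.
from datetime import datetime, timezone

def parse_bucket_name(name: str):
    """Return (newest, oldest, local_id) parsed from a bucket directory name."""
    _, newest, oldest, local_id = name.split("_")
    return int(newest), int(oldest), int(local_id)

newest, oldest, local_id = parse_bucket_name("db_1384235999_1384149600_72640")
span_hours = (newest - oldest) / 3600
print(datetime.fromtimestamp(oldest, tz=timezone.utc), "->",
      datetime.fromtimestamp(newest, tz=timezone.utc),
      f"(~{span_hours:.0f}h, id {local_id})")
```

The bucket in the error spans roughly 24 hours, which lines up with the maxHotSpanSecs = 86400 setting shown later in the thread.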


rmorlen
Splunk Employee

Buckets were rolling too often from warm to cold. We were trying to get too fancy with our settings. We only needed frozenTimePeriodInSecs = 2592000; neither maxWarmDBCount nor maxHotSpanSecs was necessary.


rmorlen
Splunk Employee

What I have found is that there is a Linux limit of roughly 32,000 subdirectories per directory on ext3 filesystems (each bucket is a subdirectory), and my colddb directory has hit that limit even though we only keep 30 days of data.
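A quick way to check how close a colddb directory is to that limit is to count its subdirectories, since each bucket is one directory. This is a sketch; the path is the one from this thread and should be adjusted for your indexer:

```python
import os

def colddb_bucket_count(colddb_path: str) -> int:
    """Count bucket subdirectories in a colddb directory (each bucket is one dir)."""
    return sum(1 for entry in os.scandir(colddb_path)
               if entry.is_dir(follow_symlinks=False))

# Example usage (path from this thread; adjust for your environment):
# count = colddb_bucket_count("/splunkidx/defaultdb/colddb")
# if count > 31000:  # ext3 allows roughly 32,000 subdirectories per directory
#     print(f"WARNING: {count} buckets - approaching the ext3 subdirectory limit")
```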

Is there a way to manually cause a roll from cold to frozen?

Or increase the size of the warm buckets so that they are not rolling to cold so often?

Settings from indexes.conf:

[main]
homePath = /splunkidx/defaultdb/db
coldPath = /splunkidx/defaultdb/colddb
thawedPath = /splunkidx/defaultdb/thaweddb
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 400000
maxHotSpanSecs = 86400
frozenTimePeriodInSecs = 2592000
maxWarmDBCount = 30
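A quick sanity check on those numbers (plain arithmetic, no Splunk internals assumed) shows why warm-to-cold rolls were so frequent:

```python
# Retention math implied by the indexes.conf settings above.
DAY = 86400  # seconds in a day

frozenTimePeriodInSecs = 2592000  # data older than this rolls cold -> frozen
maxHotSpanSecs = 86400            # max time span of a single hot bucket
maxWarmDBCount = 30               # max warm buckets before rolling warm -> cold

print(frozenTimePeriodInSecs // DAY)  # 30 days of retention before freezing
print(maxHotSpanSecs // DAY)          # each hot bucket spans at most 1 day

# With maxWarmDBCount = 30, any bucket beyond the 30 newest warm buckets is
# rolled to cold immediately, so colddb accumulates buckets quickly.
```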
