BucketMover Errors

rmorlen
Splunk Employee

We are seeing about 100,000 events per hour with:

0600 ERROR BucketMover - aborting move because recursive copy from src='/splunkidx/defaultdb/db/db_1384235999_1384149600_72640' to dst='/splunkidx/defaultdb/colddb/inflight-db_1384235999_1384149600_72640' failed (reason='Too many links').

I realize the "Too many links" message is a Linux error. I am not sure why it is happening, but more importantly, how do I stop the error messages?

rmorlen
Splunk Employee

Buckets were rolling too often from warm to cold. We were trying to get too fancy with our settings. We only needed frozenTimePeriodInSecs = 2592000, not maxWarmDBCount and not maxHotSpanSecs.
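
For reference, a rough sketch of the simplified stanza (the settings from the indexes.conf posted below, minus the two lines we removed):

[main]
homePath = /splunkidx/defaultdb/db
coldPath = /splunkidx/defaultdb/colddb
thawedPath = /splunkidx/defaultdb/thaweddb
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 400000
# rely on 30-day retention alone (2592000 seconds = 30 days)
frozenTimePeriodInSecs = 2592000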

rmorlen
Splunk Employee

What I have found is that there is a Linux filesystem limit of about 32,000 subdirectories per directory (on ext3, this hard-link cap is what produces the "Too many links" error), and my colddb directory has hit that limit although we only keep 30 days of data.
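
A quick way to check the count, assuming the colddb path from the indexes.conf below (on ext3, a directory's hard-link count equals its number of subdirectories plus two):

# count bucket directories under colddb
find /splunkidx/defaultdb/colddb -mindepth 1 -maxdepth 1 -type d | wc -l
# or read the directory's hard-link count directly
stat -c %h /splunkidx/defaultdb/colddb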

Is there a way to manually cause a roll from cold to frozen?

Or increase the size of the warm buckets so that they do not roll to cold so often?
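
One option I am considering for the first question: temporarily lower frozenTimePeriodInSecs and restart, so that cold buckets whose newest event is older than the new value are frozen (deleted, unless coldToFrozenDir or coldToFrozenScript is set) on the next housekeeping pass. A sketch with a hypothetical 25-day value:

[main]
# temporary: freeze anything older than 25 days (25 * 86400 = 2160000)
frozenTimePeriodInSecs = 2160000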

Settings from indexes.conf:

[main]
homePath = /splunkidx/defaultdb/db
coldPath = /splunkidx/defaultdb/colddb
thawedPath = /splunkidx/defaultdb/thaweddb
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 400000
maxHotSpanSecs = 86400
frozenTimePeriodInSecs = 2592000
maxWarmDBCount = 30
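
For anyone reading along, my understanding of how these settings interact (per the indexes.conf spec; the annotations are full-line conf comments):

[main]
# hot buckets grow to roughly 10 GB (64-bit systems) before rolling to warm
maxDataSize = auto_high_volume
# but a hot bucket also rolls once it spans a day of event time
maxHotSpanSecs = 86400
# keep at most 30 warm buckets; older warm buckets roll to colddb
maxWarmDBCount = 30
# freeze (delete by default) buckets older than 30 days (2592000 seconds)
frozenTimePeriodInSecs = 2592000
# cap the whole index at about 400 GB; oldest buckets freeze if exceeded
maxTotalDataSizeMB = 400000

With maxHotSpanSecs forcing at least one roll per day and maxWarmDBCount keeping only 30 warm buckets, many small buckets end up in colddb; letting buckets fill to auto_high_volume size and keeping the default warm count would leave far fewer subdirectories there.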
