Knowledge Management

[SmartStore] bucket size and migration?

rbal_splunk
Splunk Employee

We are planning to move from local storage to remote storage (SmartStore), and we have reviewed the Splunk documentation. One of the recommendations is to set maxDataSize = auto.
Is there any reason to be concerned about existing buckets while migrating to SmartStore?
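For context, here is a minimal indexes.conf sketch of the recommended setting alongside a SmartStore volume definition. The index name, volume name, and S3 bucket path are illustrative, not taken from this thread:

```ini
# indexes.conf -- illustrative SmartStore configuration
# Volume name and S3 path are hypothetical examples.
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket

[my_smartstore_index]
remotePath = volume:remote_store/$_index_name
# Recommended before migrating: let Splunk auto-size buckets
# rather than using auto_high_volume (which creates much larger buckets).
maxDataSize = auto
```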


esix_splunk
Splunk Employee

This also means we would have to transfer 5 GB buckets between the object store and the indexer, so searches will incur more time. Setting this to auto sizes buckets at about 650 MB, and in general testing with AWS S3 it takes a little under 2 seconds to copy a full bucket.
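As a rough back-of-the-envelope, assuming the ~2 seconds per 650 MB figure above and that copy time scales linearly with bucket size:

```python
# Rough estimate of bucket-fetch time from the object store, assuming the
# ~2 s per 650 MB figure scales linearly. Baseline numbers come from the
# informal AWS S3 testing mentioned above, not a guarantee.

def fetch_time_seconds(bucket_mb: float,
                       baseline_mb: float = 650.0,
                       baseline_s: float = 2.0) -> float:
    """Estimated time to copy a bucket of the given size, in seconds."""
    return bucket_mb / baseline_mb * baseline_s

print(round(fetch_time_seconds(650), 1))       # auto-sized bucket  -> 2.0
print(round(fetch_time_seconds(5 * 1024), 1))  # 5 GB bucket        -> 15.8
```

Under these assumptions, a 5 GB bucket takes roughly eight times longer to localize than an auto-sized one, which is the search-latency concern being raised here.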

rbal_splunk
Splunk Employee

Splunk allows historical buckets to be loaded into SmartStore. This does mean we have to make 5 GB of space available any time such a bucket is needed for a search.
