Splunk Enterprise

Pointing Splunk index data to different storage

elend
Path Finder

Hi there,

Can I point the secondary storage of my Splunk indexer at a different storage tier, so it mixes local disk with something like cloud storage?

So it would look something like this (this seems to be the common pattern):

[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path

 

Is there a mechanism or reference for doing this?


PrewinThomas
Motivator

@elend 

As @richgalloway mentioned, this is exactly what SmartStore is designed for.

Hot buckets stay on local disk for fast ingestion and search, while warm buckets are offloaded to remote storage (e.g., S3).


https://docs.splunk.com/Documentation/SVA/current/Architectures/SmartStore


Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!

elend
Path Finder

Oh, thank you. So I just point it at the bucket like in the sample, right?


livehybrid
SplunkTrust

Hi @elend 

You configure a volume in your indexes.conf, which is essentially your S3 location, and then you can update all indexes (or individual ones) to use that volume by setting remotePath, e.g.

remotePath = volume:<VOLUME_NAME>/$_index_name

The $_index_name part is an internal variable, so you don't need to overwrite it.
In addition to the docs I posted in my previous reply, it's worth checking https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/SmartStoresecuritystrategies too.
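
For example, a minimal indexes.conf sketch might look like the below (the volume name, index name, and S3 details are placeholders/assumptions to swap for your own):

[volume:remote_store]
storageType = remote
path = s3://<bucketname>/rest/of/path
# Non-AWS S3-compatible object stores usually also need an endpoint, e.g.:
# remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thawedb
# Point this index at the remote volume; $_index_name resolves automatically
remotePath = volume:remote_store/$_index_name

Setting remotePath under [default] instead of a single index stanza applies it to all indexes.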

 

 

🌟 Did this answer help you? If so, please consider:

    • Adding karma to show it was useful
    • Marking it as the solution if it resolved your issue
    • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.


 

livehybrid
SplunkTrust

Hi @elend 

Yes, you can configure Splunk (since 7.2, I believe) to use a mixture of local storage and S3-compatible storage, including the likes of Amazon S3, using Splunk's SmartStore functionality. It essentially uses your local storage for hot buckets and as a local cache for buckets that are also stored in S3. It's more of a complex beast than I can go into here, and there are lots of things to consider; for example, the migration is generally considered a one-way exercise!

https://docs.splunk.com/Documentation/SVA/current/Architectures/SmartStore gives a good overview of the architecture, benefits and next steps.

Check out https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.3/man... for more info on setting up SmartStore, as well as https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.4/dep... which has some info on setting this up on a single indexer (as a starting point; this will depend on your specific environment architecture).
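
If it helps to picture the local-cache side, the cache manager is tuned in server.conf; the values below are purely illustrative assumptions, not recommendations:

[cachemanager]
# Upper bound (in MB) of local disk used for cached SmartStore buckets; 0 means unlimited
max_cache_size = 500000
# Evict least-recently-used buckets first when space is needed
eviction_policy = lru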

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.


richgalloway
SplunkTrust

You've just stumbled across SmartStore (S2).  S2 keeps hot buckets local and copies warm buckets to S3.  A cache of roughly 30 days of data is retained locally for faster search performance.

To implement S2 correctly, see https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/AboutSmartStore
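
How much stays cached locally comes down to cache-eviction tuning; as a rough sketch in indexes.conf (values here are illustrative, not defaults), you can bias the cache manager toward keeping about a month of recent buckets on disk:

[default]
# Prefer to keep buckets whose newest events are within ~30 days
hotlist_recency_secs = 2592000
# Keep bloom filters cached longer than the bucket data itself (in hours)
hotlist_bloom_filter_recency_hours = 720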

---
If this reply helps you, Karma would be appreciated.