Hi there,
I want to point the secondary storage of my Splunk indexer to a different storage tier, such as cloud storage. Something like this, where the first stanza is the usual local volume:
[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path
Is there a mechanism or reference for doing this?
Hi @elend
You configure a volume in your indexes.conf that points to your S3 location, and then you can update all or individual indexes to use that volume by setting remotePath, e.g.
remotePath = volume:<VOLUME_NAME>/$_index_name
The $_index_name is an internal variable, so you don't need to overwrite it.
In addition to the other docs I posted on the previous post, it's worth checking https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/SmartStoresecuritystrategies too.
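Putting the volume and remotePath together, a minimal indexes.conf sketch might look like the following. The bucket name, endpoint, and index name here are placeholders I've made up for illustration, not values from this thread:

```ini
# indexes.conf -- minimal SmartStore sketch (all names/paths are hypothetical)

# Remote volume backed by S3
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
# Optional: explicit endpoint, e.g. for non-AWS S3-compatible object stores
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

# Apply to a single index ...
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/$_index_name

# ... or enable it for all indexes at once via the [default] stanza:
# [default]
# remotePath = volume:remote_store/$_index_name
```

Because $_index_name is expanded by Splunk itself, the same remotePath line works unchanged whether it is set per index or under [default].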
🌟 Did this answer help you? If so, please consider marking it as the solution or giving karma.
Your feedback encourages the volunteers in this community to continue contributing.
As @richgalloway mentioned,
This is exactly what SmartStore is designed for.
Hot buckets stay on local disk for fast ingestion and search, and warm buckets are offloaded to remote storage (e.g., S3).
https://docs.splunk.com/Documentation/SVA/current/Architectures/SmartStore
Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Oh, thank you. So that just points to the bucket like in the sample, right?
Hi @elend
Yes, since Splunk 7.2 (I think) you can configure Splunk to use a mixture of local storage and S3-compatible storage, including the likes of Amazon S3, using Splunk's SmartStore functionality. This essentially uses your local storage for hot buckets and as a local cache for buckets that are also stored in S3. It's a more complex beast than I can go into here, and there are lots of things to consider; for example, moving an index to SmartStore is generally considered a one-way exercise!
https://docs.splunk.com/Documentation/SVA/current/Architectures/SmartStore gives a good overview of the architecture, benefits and next steps.
Check out https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.3/man... for more info on setting up SmartStore, as well as https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.4/dep... which has some info on setting this up on a single indexer (as a starter; the details will depend on your specific environment architecture).
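On the local-cache side of this, the SmartStore cache manager is tuned in server.conf. A sketch of the two most commonly adjusted settings; the values below are illustrative assumptions, not recommendations for your environment:

```ini
# server.conf -- SmartStore cache manager sketch (values are illustrative only)
[cachemanager]
# Upper bound, in MB, on the local cache of remote buckets (0 = no fixed limit)
max_cache_size = 500000
# Free space, in MB, the cache manager tries to keep available on the cache partition
eviction_padding = 5120
```

Sizing this cache against your typical search window is one of the "lots of things to consider" mentioned above.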
You've just stumbled across SmartStore (S2). S2 keeps hot buckets local and copies warm buckets to S3. A cache of roughly 30 days of data is retained locally for faster search performance.
To implement S2 correctly, see https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/AboutSmartStore
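The "roughly 30 days" retained locally isn't a fixed behaviour; it falls out of cache-eviction tuning. One relevant knob is hotlist_recency_secs, which protects recently written buckets from eviction and can be set per index in indexes.conf. A sketch, using a hypothetical index name and simply expressing the 30-day figure from this post in seconds:

```ini
# indexes.conf -- keep recently written buckets cache-resident longer
# (2592000 s = 30 days; tune to match your common search window)
[my_index]
hotlist_recency_secs = 2592000
```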