I'm doing a proof of concept of SmartStore with multiple object stores. There appears to be a defect where the remote.s3.access_key (and maybe remote.s3.secret_key) is not being properly associated with the volume stanza.
Specifically, in my indexes.conf, I have the following:
[volume:remote_store_0]
storageType = remote
path = s3://splunk-ss-01-0
remote.s3.endpoint = http://xx.xx.xx.xxx
[volume:remote_store_1]
storageType = remote
path = s3://splunk-ss-01-1
remote.s3.access_key = [REDACTED_1]
remote.s3.secret_key = [REDACTED_1]
remote.s3.endpoint = http://xx.xx.xx.xxx
What is happening is that when I try to use remote_store_1, the access key for remote_store_0 is used instead. Note that the endpoint and path are correctly associated with the volume stanza; it is at least the access_key (and possibly the secret_key) that is not.
The bug is particularly confusing because running splunk cmd splunkd rfs -- ls --starts-with volume:remote_store_1 does use the correct access_key for that volume.
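To make the expected behavior concrete, here is a minimal sketch (not Splunk code, and using placeholder stanza names and key values) that parses indexes.conf-style volume stanzas and shows that each volume should resolve to its own access key — the association the reported bug appears to break:

```python
# Sketch: each [volume:*] stanza should carry its own credentials.
# Stanza names and key values are placeholders, not real credentials.
import configparser

INDEXES_CONF = """
[volume:remote_store_0]
storageType = remote
path = s3://splunk-ss-01-0
remote.s3.access_key = KEY_FOR_STORE_0

[volume:remote_store_1]
storageType = remote
path = s3://splunk-ss-01-1
remote.s3.access_key = KEY_FOR_STORE_1
"""

cp = configparser.ConfigParser()
cp.read_string(INDEXES_CONF)

# Expected behavior: each volume resolves to its own access key.
for stanza in cp.sections():
    print(stanza, "->", cp[stanza]["remote.s3.access_key"])
# The reported bug: operations against volume:remote_store_1
# used KEY_FOR_STORE_0 rather than KEY_FOR_STORE_1.
```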
Splunk version 7.3.1 and above lets you configure indexes on an indexer to use different SmartStore object stores. For example, below I have configured the _internal index to use one SmartStore volume and the _audit index to use another.
===========First Smartstore Configuration=======
[volume:my_s3_vol]
storageType = remote
path = s3://newrbal1
remote.s3.access_key = AXXKIAIQWJDOATYCYFTTTTTKWZ5A
remote.s3.secret_key = dCCCCCCCCCCN7rMvSN96RSDDDDYqcKeSSSSi3TcD6YQS8J+EzQI5Qm+Ar9
remote.s3.endpoint = https://s3-us-east-2.amazonaws.com
remote.s3.signature_version = v4
===========Second Smartstore Configuration=======
[volume:aws_s3_vol]
storageType = remote
path = s3://luantest
remote.s3.access_key = AKIASVRRRRDSSVCAAAANBVKZXK4T
remote.s3.secret_key = JYD7umcpFFFFHKM4/uq7Wi/rfyUUHdcSFFFz3j2N85bg8wK
remote.s3.endpoint = https://s3-us-east-2.amazonaws.com
remote.s3.signature_version = v4
=============Here index _internal is configured with SmartStore [volume:aws_s3_vol]=====
[_internal]
thawedPath = $SPLUNK_DB/_internal/thaweddb
remotePath = volume:aws_s3_vol/$_index_name
repFactor = auto
=============Here index _audit is configured with SmartStore [volume:my_s3_vol]=====
[_audit]
thawedPath = $SPLUNK_DB/_audit/thaweddb
remotePath = volume:my_s3_vol/$_index_name
repFactor = auto
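A quick sanity check on a configuration like the one above is to confirm that every index's remotePath points at a volume stanza that is actually defined. This sketch (using simplified copies of the stanzas above; it is not a Splunk tool) shows the idea:

```python
# Sketch: verify that each index's remotePath references a defined
# [volume:*] stanza. Stanza contents are simplified placeholders.
import configparser

CONF = """
[volume:my_s3_vol]
storageType = remote
path = s3://newrbal1

[volume:aws_s3_vol]
storageType = remote
path = s3://luantest

[_internal]
remotePath = volume:aws_s3_vol/$_index_name

[_audit]
remotePath = volume:my_s3_vol/$_index_name
"""

cp = configparser.ConfigParser()
cp.read_string(CONF)

volumes = {s for s in cp.sections() if s.startswith("volume:")}
for stanza in cp.sections():
    rp = cp[stanza].get("remotePath")
    if rp:
        vol = rp.split("/", 1)[0]  # e.g. "volume:aws_s3_vol"
        status = "OK" if vol in volumes else "MISSING VOLUME"
        print(stanza, "uses", vol, "-", status)
```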
I recommend skipping 7.3.1 for SmartStore migration, as it has a serious bug that freezes buckets during migration and carries a high risk of data loss.
Does Splunk support multiple S3 object stores configured within the same indexer cluster?
I understand that indexes.conf certainly allows this, and I can configure each index to point to a specific S3 store, but I wanted to confirm that this is indeed supported by Splunk.
Yes, Splunk supports multiple S3 object stores configured within the same indexer cluster.
Thank you.