Knowledge Management

[SmartStore] Can I configure Splunk SmartStore indexer with multiple object stores

rbal_splunk
Splunk Employee

I'm doing a proof of concept of SmartStore with multiple object stores. There appears to be a defect where the remote.s3.access_key (and maybe remote.s3.secret_key) is not being properly associated with the volume stanza.

Specifically, in my indexes.conf, I have the following:

[volume:remote_store_0]
storageType = remote
path = s3://splunk-ss-01-0
remote.s3.access_key = [REDACTED_0]
remote.s3.secret_key = [REDACTED_0]
remote.s3.endpoint = http://xx.xx.xx.xxx

[volume:remote_store_1]
storageType = remote
path = s3://splunk-ss-01-1
remote.s3.access_key = [REDACTED_1]
remote.s3.secret_key = [REDACTED_1]
remote.s3.endpoint = http://xx.xx.xx.xxx

What is happening is that when I try to use remote_store_1, the access key for remote_store_0 is used instead. Note that the endpoint and path are properly associated with the volume specification; it is at least the access_key (and maybe the secret_key) that is not being properly associated with the volume stanza.

The bug is particularly confusing because running splunk cmd splunkd rfs -- ls --starts-with volume:remote_store_1 does use the correct access_key associated with the volume.
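For reference, a quick diagnostic sketch (assuming the stanza names above) to compare what each volume resolves to: btool shows the effective settings and which file they come from, and the same rfs command can be run against each volume to see which credentials are actually used.

# Show the settings Splunk has resolved for each volume stanza
splunk btool indexes list volume:remote_store_0 --debug
splunk btool indexes list volume:remote_store_1 --debug

# List each remote store independently; each call should pick up that volume's own access_key
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store_0
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store_1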


rbal_splunk
Splunk Employee

Splunk version 7.3.1 and above lets you configure different indexes on the same indexer to use different SmartStore object stores. For example, below I have configured the _internal index to use one SmartStore volume and the _audit index to use the other.

===========First SmartStore Configuration=======

[volume:my_s3_vol]
storageType = remote
path = s3://newrbal1
remote.s3.access_key = AXXKIAIQWJDOATYCYFTTTTTKWZ5A
remote.s3.secret_key = dCCCCCCCCCCN7rMvSN96RSDDDDYqcKeSSSSi3TcD6YQS8J+EzQI5Qm+Ar9
remote.s3.endpoint = https://s3-us-east-2.amazonaws.com
remote.s3.signature_version = v4

===========Second SmartStore Configuration (AWS S3 storage)=======

[volume:aws_s3_vol]
storageType = remote
path = s3://luantest
remote.s3.access_key = AKIASVRRRRDSSVCAAAANBVKZXK4T
remote.s3.secret_key = JYD7umcpFFFFHKM4/uq7Wi/rfyUUHdcSFFFz3j2N85bg8wK
remote.s3.endpoint = https://s3-us-east-2.amazonaws.com
remote.s3.signature_version = v4

=============Here the _internal index is configured to use [volume:aws_s3_vol]=====
[_internal]
thawedPath = $SPLUNK_DB/_internal/thaweddb
remotePath = volume:aws_s3_vol/$_index_name
repFactor = auto

=============Here the _audit index is configured to use [volume:my_s3_vol]=====

[_audit]
thawedPath = $SPLUNK_DB/_audit/thaweddb
remotePath = volume:my_s3_vol/$_index_name
repFactor = auto
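The same pattern applies to any custom index. A minimal sketch, assuming a hypothetical index named web_logs that should store its warm/cold data on the second volume:

[web_logs]
homePath = $SPLUNK_DB/web_logs/db
coldPath = $SPLUNK_DB/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb
# Point this index at whichever SmartStore volume it should use
remotePath = volume:aws_s3_vol/$_index_name
repFactor = auto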


saiganesh49
Explorer

I recommend skipping 7.3.1 for SmartStore migration, as it has a serious bug that can freeze buckets during migration, with a high chance of losing your data.


srajarat2
Path Finder

Does Splunk support multiple S3 object stores configured within the same indexer cluster?

I understand that indexes.conf certainly allows this, and I can configure each index to point to a specific S3 store, but I wanted to confirm whether this is indeed supported by Splunk.


saiganesh49
Explorer

Yes, Splunk supports multiple S3 object stores configured within the same indexer cluster.


srajarat2
Path Finder

Thank you.
