Hello all,
I'm trying to connect my indexer cluster to an on-premises S3 storage (SmartStore), and I'm using the cluster master to push the configuration to the peers.
I've tested the access credentials with a standalone instance outside my cluster and it works.
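(By "tested", I mean something like listing the bucket with the AWS CLI pointed at the on-prem endpoint, using the same placeholder bucket and URL as in the stanzas below:
aws s3 ls s3://bucket1 --endpoint-url https://mys3.fr
plus the same credentials in a standalone Splunk instance's indexes.conf.)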
Now I'm trying to use two different apps to declare the volume and the index, like this:
.../master-apps/common_indexers/local/indexes.conf #volume stanza
[volume:bucket1]
storageType = remote
path = s3://bucket1
remote.s3.endpoint = https://mys3.fr
remote.s3.access_key = xx
remote.s3.secret_key = xx
remote.s3.signature_version = v2
remote.s3.supports_versioning = false
remote.s3.auth_region = EU
.../master-apps/common_indexes/local/indexes.conf #index stanza
[index1]
homePath = $SPLUNK_DB/$_index_name/db
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
coldPath = $SPLUNK_DB/$_index_name/colddb
remotePath = volume:bucket1/$_index_name
Bundle validation then fails on the peers with:
[Critical] Unable to load remote volume "bucket1" of scheme "s3" referenced by index "index1": Could not find access_key and/or secret_key in a configuration file, in environment variables or via the AWS metadata endpoint.
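For reference, to double-check what a peer actually received after the bundle push, btool on the indexer shows the merged stanza and which file each setting comes from (hypothetical path, same stanza name as above):
$SPLUNK_HOME/bin/splunk btool indexes list volume:bucket1 --debug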
Any idea ? Anyone ?

Ok, here's the answer:
The access_key and secret_key need to be encrypted on each indexer separately, after distribution by the cluster master. So it only seems to work if you first write them in clear text on the master, then apply the bundle; each peer then encrypts them locally.
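In other words, the flow that seems to work looks roughly like this (all key values are placeholders, and the encrypted form is only an illustration; adapt the paths to your install):

# On the cluster master, in .../master-apps/common_indexers/local/indexes.conf,
# leave the keys in clear text:
remote.s3.access_key = AKIAxxxxxxxxxxxx
remote.s3.secret_key = mySecretInClearText

# Push the bundle from the master:
splunk apply cluster-bundle --answer-yes

# After the peers restart with the new bundle, each indexer encrypts the
# secret with its own splunk.secret, so on a peer it ends up looking like:
# remote.s3.secret_key = $7$...

My understanding is that if the value is pushed already encrypted (with the master's splunk.secret), the peers can't decrypt it and behave as if the key were missing, which is exactly the bundle validation error above.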