Hi all,
I'm currently trying to set up a SmartStore index using on-prem S3-compliant storage.
The logs I'm seeing in _internal related to the S3Client component are as follows:
statusCode=502 statusDescription="Error connecting: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name."
My (slightly redacted) config for the bucket in indexes.conf is as follows:
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000
[volume:remote_store]
storageType = remote
path = s3://splunk-smartstore/netapp-smartstore
remote.s3.access_key = access key
remote.s3.secret_key = secret key
remote.s3.endpoint = https://s3-sgws.domain:8082
remote.s3.encryption = none
remote.s3.sslVerifyServerCert = false
[netapp_smartstore]
homePath = volume:primary/netapp_smartstore/db
coldPath = $SPLUNK_DB/netapp_smartstore/colddb
thawedPath = $SPLUNK_DB/netapp_smartstore/thaweddb
repFactor = auto
remotePath = volume:remote_store/netapp_smartstore/colddb
maxGlobalDataSizeMB = 1024
hotlist_recency_secs = 3600
hotlist_bloom_filter_recency_hours = 3600
frozenTimePeriodInSecs = 31536000
maxDataSize = auto
I have successfully set up SmartStore with an AWS bucket. The only differences are that the public AWS endpoints have properly issued certificates (we don't have that on site, just default certs, so I'm not sure exactly which cert to check with the `openssl verify` command), and that I had actually created the folders/path within the bucket before initiating SmartStore, whereas with the on-prem version I assumed the folders would be created on instantiation. I'm not sure how this second difference could cause any SSL errors, though. I would also have thought that setting remote.s3.sslVerifyServerCert = false would rule out any cert errors.
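In case it helps anyone in the same position: you can pull the certificate the endpoint actually presents with s_client and inspect its CN and Subject Alternative Names with openssl x509. The hostname in the commented command is a stand-in for your real endpoint; the rest just generates a throwaway self-signed cert locally to show what the inspection commands print:

```shell
# Against the live endpoint (replace the host:port with yours):
#   openssl s_client -connect s3-sgws.domain:8082 -showcerts </dev/null \
#     | openssl x509 -noout -subject -issuer -ext subjectAltName

# Local demonstration with a throwaway self-signed cert (hostname is made up):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo.key -out demo.crt \
  -subj "/CN=s3-sgws.example" \
  -addext "subjectAltName=DNS:s3-sgws.example"

# Check the subject CN and the Subject Alternative Names
openssl x509 -in demo.crt -noout -subject
openssl x509 -in demo.crt -noout -ext subjectAltName
```

If the subject CN is garbage or there are no SANs matching the endpoint hostname, verification will fail regardless of which CA file you point Splunk at.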
Anyone who's run into this and can offer any advice, it is most welcome.
I am experiencing the same issue. I am on version 7.2.5 and have tried a few different combinations of certificates, as well as disabling SSL verification, but I still get the unknown CA error when I use on-prem S3 storage. Did anyone find a solution? @Andrew_Callan, did you manage to fix this issue? Please let me know.
Hi,
Since you are using on-prem S3 storage and your endpoint runs on HTTPS, you need to configure the root or intermediate CA certificate of your S3 instance's certificate chain in Splunk.
The parameter to configure in indexes.conf is:
remote.s3.sslRootCAPath = <path>
* Full path to the Certificate Authority (CA) certificate PEM format file
containing one or more certificates concatenated together. S3 certificate
will be validated against the CAs present in this file.
* Optional.
* Default: [sslConfig/caCertFile] in server.conf
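To sanity-check the PEM file before pointing remote.s3.sslRootCAPath at it, you can run the same validation openssl performs. A minimal sketch with a throwaway CA and server cert (all names here are made up, not from this thread):

```shell
# Throwaway root CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.pem -subj "/CN=Example Root CA"

# Server key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=s3-sgws.example"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -out server.pem -days 1

# "server.pem: OK" means the server cert validates against the CA bundle
openssl verify -CAfile ca.pem server.pem

# If there is an intermediate CA, concatenate it with the root into one bundle:
#   cat intermediate.pem root.pem > ca_bundle.pem
```

Run the `openssl verify` step with your real CA file and the cert your endpoint serves; if that fails on the command line, Splunk will fail the same way.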
Thanks for the answer!
Since posting I've tried a few other configs. It turns out that the default cert used on the on-prem S3 store was totally bogus: it had a number for the CN and no Subject Alternative Names. I've given it a proper certificate from our CA and installed the root CA using the following parameter:
remote.s3.sslRootCAPath = /opt/splunk/etc/certs/root_cert.pem
The cert has 400 permissions and is owned by the splunk user. I've restarted with this parameter specified, and with remote.s3.sslVerifyServerCert set to both true and false, but I'm still getting the unknown CA error.
I will admit however that I'm not massively experienced with using custom certs in Splunk.
Can you please try the command below to connect to the S3 instance and check whether you get any errors?
/opt/splunk/bin/splunk cmd openssl s_client -connect s3-sgws.domain:8082 -CAfile /opt/splunk/etc/cs_certs/cs_root_cert.pem
EDIT: Above command updated.
I had run btool with --debug before posting; the false setting for verifying the cert was being read from my indexes.conf file in the app configuring SmartStore. I'm not sure if there's just something peculiar about the S3 endpoint I'm using.
With the https:// scheme included in the -connect argument I get:
getservbyname failure for //s3-sg.domain.net:8082
usage: s_client args
(s_client expects a bare host:port, so the scheme has to be dropped.) When I drop the scheme entirely, I get a connection displaying the following and a prompt:
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1560519821
Timeout : 300 (sec)
Verify return code: 0 (ok)
Verify return code: 0 (ok) means the CA certificate you provided is working fine. Is it possible to replicate this issue on Splunk 7.2? I tried SmartStore on 7.2 with HTTP only in my lab environment; I never tried HTTPS.
Confirmed the same error exists on 7.2.6, while the command you gave me to run still returns the 0 return code.
I can't replicate this in my lab at the moment, but I may try over the weekend. Based on the documentation at https://docs.splunk.com/Documentation/Splunk/7.3.0/Indexer/SmartStoresecuritystrategies#Manage_SSL_c... , you may try configuring caCertFile (this is deprecated) in server.conf, but it will break other inter-Splunk SSL communication if you are using sslVerifyServerCert = true in server.conf.
The S3 SSL settings are overlaid on the sslConfig stanza in server.conf, except for sslVerifyServerCert, sslAltNameToCheck, and sslCommonNameToCheck. Therefore, if you run into issues, consult the server.conf SSL settings, in addition to the remote-storage-specific settings.
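For reference, a rough sketch of how the two files can fit together per that overlay rule (paths and hostnames here are placeholders, not taken from the thread):

```ini
# server.conf -- global SSL settings that the S3 client inherits
[sslConfig]
caCertFile = /opt/splunk/etc/auth/cacert.pem

# indexes.conf -- remote-store-specific overrides
[volume:remote_store]
storageType = remote
path = s3://splunk-smartstore/netapp-smartstore
remote.s3.endpoint = https://s3-sgws.example:8082
# overrides [sslConfig]/caCertFile for S3 traffic only
remote.s3.sslRootCAPath = /opt/splunk/etc/certs/root_cert.pem
remote.s3.sslVerifyServerCert = true
```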
I have tested SmartStore with HTTPS in my lab environment, and if you set remote.s3.sslVerifyServerCert = false, then it works without setting remote.s3.sslRootCAPath.
Since you mentioned that you already tried remote.s3.sslVerifyServerCert = false, in this case I'd suggest checking your indexes.conf configuration with btool (e.g. splunk btool indexes list --debug) and reviewing the SmartStore settings. If the same setting exists in any other indexes.conf, configuration precedence applies. For more on precedence, see https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/Wheretofindtheconfigurationfiles
It'll take some setting up, but I can try this on 7.2.
It may help to mention that this is on version 7.3.0.