Deployment Architecture

Why is Deployment Server Forwarder Management slow to return the number of connected clients after specifying an sslKeysfile value in server.conf?

coltwanger
Contributor

After specifying an sslKeysfile value in server.conf, the Deployment Server Forwarder Management interface is very slow to return the number of connected clients. I have over 1,000 forwarders that normally report to this server, and without these settings the list is fully populated after only about a minute. Since securing splunkd I've waited several minutes, and the list is still very slow to populate.

This actually came up because I had been running with "sslKeysFile" (capital F, which is a typo), so I'm assuming splunkd was never actually secured up until this point, when I started removing the deprecated SSL settings in [sslConfig] (caCertFile, caPath, etc.).

My previous config (with the typo, fast forwarder list generation):

[sslConfig]
allowSslCompression = false
caCertFile = cacert.crt
caPath = $SPLUNK_HOME/etc/auth/mycerts
enableSplunkdSSL = True
requireClientCert = false
sslKeysFile = server.pem
sslVersions = *,-ssl2,-ssl3
sslPassword = <redacted>

My new config (slow list generation):

[sslConfig]
allowSslCompression = false
caCertFile = cacert.crt
caPath = $SPLUNK_HOME/etc/auth/mycerts
enableSplunkdSSL = True
requireClientCert = false
sslKeysfile = server.pem
sslVersions = *,-ssl2,-ssl3
sslPassword = <REDACTED>

And finally, with the new setting names (the [kvstore] stanza is there because I'm also running SPLUNK_FIPS), which also gives slow list generation:

[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/cacert.crt
sslPassword = <REDACTED>
allowSslCompression = false
enableSplunkdSSL = true
requireClientCert = false

[kvstore]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslPassword = <REDACTED>
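
A quick sanity check on this setup is to confirm that server.pem actually loads with the configured sslPassword before pointing splunkd at it. A minimal Python sketch, assuming $SPLUNK_HOME is /opt/splunk and using a placeholder password:

# Sanity check: confirm that server.pem and cacert.crt load cleanly with
# the configured sslPassword. A wrong password or malformed PEM raises
# ssl.SSLError here.
import ssl

# Assumed paths; adjust for your $SPLUNK_HOME. The password is a placeholder.
PEM_PATH = "/opt/splunk/etc/auth/mycerts/server.pem"
CA_PATH = "/opt/splunk/etc/auth/mycerts/cacert.crt"
SSL_PASSWORD = "changeme"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(PEM_PATH, password=SSL_PASSWORD)  # cert + encrypted key in one file
ctx.load_verify_locations(cafile=CA_PATH)             # the CA the clients should trust
print("server.pem and cacert.crt loaded cleanly")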

Every configuration except the first one results in a very slow return of deployment clients in the GUI. Has anyone seen this before? Any ideas on what to look for? splunkd.log doesn't show anything that stands out; this is really all that pops up in web_service.log:

2016-12-09 16:56:32,325 INFO    [584b52b93cb924f1f470] root:650 - CONFIG: error_page.default (instancemethod): <bound method ErrorController.handle_error of <controllers.error.ErrorController object at 0x000000B925F2F860>>
2016-12-09 16:55:19,733 ERROR   [584b503c1cefa7801470] root:129 - ENGINE: Handler for console events already off.
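
One way to narrow this down is to time the deployment client list over the REST endpoint directly, bypassing the Forwarder Management UI, to see whether splunkd itself is slow to answer. A rough Python sketch, with placeholder hostname and credentials, and with certificate verification disabled purely for the test:

# Time the deployment client list over REST, bypassing the Forwarder
# Management UI, to see whether splunkd itself is slow to answer.
import base64
import json
import ssl
import time
import urllib.request

DS_HOST = "splunk"                    # placeholder deployment server hostname
USER, PASSWORD = "admin", "changeme"  # placeholder credentials

url = (f"https://{DS_HOST}:8089/services/deployment/server/clients"
       "?output_mode=json&count=0")
req = urllib.request.Request(url)
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

# Verification is disabled only so the timing test works no matter which
# certificate 8089 is currently presenting.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

start = time.monotonic()
with urllib.request.urlopen(req, context=ctx) as resp:
    clients = json.load(resp).get("entry", [])
print(f"{len(clients)} deployment clients returned in {time.monotonic() - start:.1f}s")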

coltwanger
Contributor

I believe I've found what the issue was.

When we had PS out to help install our environment, we configured SSL on port 8089 (enableSplunkdSSL) with internal certs, and we also configured forwarding from our UFs over SSL.

It appears that the mistyped key ("sslKeysFile", with the capital F) caused our Deployment Server to fall back on the default internal certificate. I found this by browsing to the REST port (https://splunk:8089) and checking the certificate in the browser: it reported SplunkInternalCA.
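
The same check can be scripted instead of using a browser. A small Python sketch that prints the issuer and subject of whatever certificate splunkd presents on 8089; it assumes the third-party cryptography package is installed and uses a placeholder hostname:

# Print the issuer and subject of the certificate splunkd presents on 8089,
# so you can see whether it is SplunkInternalCA or your own CA.
import ssl
from cryptography import x509  # third-party "cryptography" package

DS_HOST = "splunk"  # placeholder deployment server hostname

pem = ssl.get_server_certificate((DS_HOST, 8089))  # no validation, just fetch
cert = x509.load_pem_x509_certificate(pem.encode())
print("issuer: ", cert.issuer.rfc4514_string())
print("subject:", cert.subject.rfc4514_string())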

Another factor is that we appear to have never actually deployed certs to our forwarders, so they were using the default Splunk certificates to communicate with our Deployment Server. This wasn't caught because everything appeared to be working correctly with the Deployment Server; in reality that was only because the DS was using the default cert on 8089, and so were the UFs. When I fixed the typo, splunkd started using my internally signed certificate, but the forwarders had trouble with the connection, with their default Splunk certs on one end and my internal certs on the other, which caused the Deployment Server to register UFs as clients very slowly, if at all. Enabling or disabling sslCompression had no effect.

If I use the new Splunk SSL settings in server.conf with Splunk's internal certificate (what the UFs are expecting), everything runs smoothly. The plan now is to take another look at correctly securing Splunk forwarding by pushing internal certs out to our UFs.

Another side effect of this was that REST API calls to Splunk over 8089 would fail with the following error:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

When we correctly secure splunkd (so the cert at https://splunk:8089 reports our internal certificate), this error also goes away. Interestingly enough, this error did not appear until our upgrade to 6.5.0.
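
For completeness, the client side of that trust relationship can also be made explicit by pointing the REST caller at the CA that signed the splunkd certificate, rather than turning verification off. A rough Python sketch of such a call, with placeholder hostname, credentials, and CA path:

# Call the splunkd REST API while explicitly trusting the internal CA,
# instead of disabling certificate verification on the client.
import base64
import ssl
import urllib.request

DS_HOST = "splunk"                                     # must match the cert's subject/SAN
USER, PASSWORD = "admin", "changeme"                   # placeholder credentials
CA_BUNDLE = "/opt/splunk/etc/auth/mycerts/cacert.crt"  # the internal CA certificate

ctx = ssl.create_default_context(cafile=CA_BUNDLE)

req = urllib.request.Request(
    f"https://{DS_HOST}:8089/services/server/info?output_mode=json")
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

with urllib.request.urlopen(req, context=ctx) as resp:
    print(resp.status)  # 200 once the presented cert chains to CA_BUNDLE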

As a side note, it's curious that Nessus complains about 8089 not being secured and about SSL compression, but not about the default certs being used on 8089 🙂

jkat54
SplunkTrust

Why are you disabling SSL compression? Compression reduces network traffic by a factor of roughly 12 or more.

Does it behave better if you enable SSL compression? I have a hunch that it will.


sirkgm14vg
Explorer

Going to have to agree here. The Splunk documentation recommends keeping SSL compression enabled, for the very reason you're posting about:

"Note: We do not recommend that you disable tls compression, as it can cause bandwidth issues."

(From "Configure Splunk forwarding to use your own certificates" in the Splunk docs.)


coltwanger
Contributor

We're disabling SSL Compression because Splunk shows up in our security scans as vulnerable to the CRIME CVE.


coltwanger
Contributor

Can't edit my post for some reason: this is Splunk 6.5.0 on a standalone Deployment Server.
