Getting Data In

How to set up SSL/TLS for the Splunk indexer?

kaurinko
Communicator

Hi,

I am trying to establish an SSL/TLS connection, using our own certificates, between the UFs and the indexer. I would also like to keep non-SSL connections enabled for some UFs, but so far I haven't been able to get the indexer to set up an SSL/TLS listener. Once everything is done I might be happy with a purely SSL/TLS-based environment, but during the transition period enabling both would be nice. I have tried to follow the instructions on the page https://docs.splunk.com/Documentation/Splunk/8.0.1/Security/ConfigureSplunkforwardingtousesignedcert...

So far I have worked with the indexer. In ~splunk/etc/system/local/inputs.conf I have added the following:

[SSL]
serverCert = /opt/splunk/etc/auth/my-CA/IndexerCerts.txt
# sslPassword = This was supposedly unnecessary
sslRootCAPath = /opt/splunk/etc/auth/my-CA/myCACertificates.txt
requireClientCert = false
sslVersions = tls1.2

I also added the following line to ~splunk/etc/system/local/server.conf:

sslRootCAPath = /opt/splunk/etc/auth/my-CA/myCACertificates.txt

The file myCACertificates.txt contains, in PEM format, the issuing CA for the indexer certificate followed by the root CA, in that order. Effectively this is a trust store containing the sub CA and the root CA above the indexer's certificate. I don't see any reason why another similar sub-CA and root-CA chain could not be included if a client certificate issued by those CAs were needed, but let's leave that for now.
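
As a sanity check, the chain in that trust store can be tested against the indexer certificate with openssl. The file name indexer-cert.pem below is just a placeholder for the leaf certificate on its own, not a file from my actual setup:

openssl verify -CAfile /opt/splunk/etc/auth/my-CA/myCACertificates.txt indexer-cert.pem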

The file IndexerCerts.txt contains the following items as PEM, in this order (a concatenation sketch follows the list):

  1. Indexer certificate
  2. Indexer private key (unencrypted)
  3. Issuing SubCA certificate
  4. RootCA certificate
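
For illustration, such a file could be built simply by concatenating the individual PEM files. The input file names here are placeholders, not the ones I actually used:

cat indexer-cert.pem indexer-key.pem subca-cert.pem rootca-cert.pem > /opt/splunk/etc/auth/my-CA/IndexerCerts.txt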

The reason the private key is unencrypted is that the documentation I found led me to understand that a password would not be needed, and it would have to be available somewhere anyway. Upon restart splunkd didn't complain about the configuration, but the connections nevertheless remained plain text. The log says the following:

12-23-2019 17:25:34.062 +0200 INFO loader - Setting SSL configuration.
12-23-2019 17:25:34.062 +0200 INFO loader - Server supporting SSL versions TLS1.2
12-23-2019 17:25:34.062 +0200 INFO loader - Using cipher suite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256
12-23-2019 17:25:34.062 +0200 INFO loader - Using ECDH curves : prime256v1, secp384r1, secp521r1
...
12-23-2019 17:25:35.923 +0200 INFO TcpInputProc - Registering metrics callback for: tcpin_connections
12-23-2019 17:25:35.923 +0200 INFO TcpInputConfig - IPv4 port 9997 is reserved for splunk 2 splunk
12-23-2019 17:25:35.923 +0200 INFO TcpInputConfig - IPv4 port 9997 will negotiate s2s protocol level 6
12-23-2019 17:25:35.924 +0200 INFO TcpInputProc - Creating fwd data Acceptor for IPv4 port 9997 with Non-SSL

Any suggestions as to what I am doing wrong?

0 Karma
1 Solution

kaurinko
Communicator

It seems like I had a stupid error in my configuration. I added

[splunktcp:9997]

[splunktcp-ssl:9998]
disabled = 0

before the [SSL] stanza and left out the sslRootCAPath configuration entry. Now I get two listeners and a proper SSL/TLS handshake on port 9998.
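
To double-check the listener, the handshake can be inspected from another host with openssl (the host name below is a placeholder):

openssl s_client -connect splunk-indexer.example.net:9998 -CAfile myCACertificates.txt < /dev/null

With the [splunktcp-ssl:9998] stanza working, this prints the indexer's certificate chain and ends with "Verify return code: 0 (ok)"; against the plain port 9997 the same command fails to negotiate TLS.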


potnuru
Path Finder

Hi @kaurinko, I have a similar requirement to the one described in your question. Could you please share the complete inputs.conf and server.conf configurations?

I didn't understand the part "before the [SSL] stanza and left out the sslRootCAPath configuration entry" in your answer.

0 Karma

kaurinko
Communicator

Hi @potnuru ,

The configurations in etc/system/local are as follows:

inputs.conf:

[default]
host = splunk.tupislab.net

[splunktcp:9997]

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/My-CA/SplunkIndexerCerts.txt
requireClientCert = true
sslVersions = tls1.2

and server.conf:

[general]
serverName = splunk.mydomain.net
pass4SymmKey = ***

[sslConfig]
sslPassword = *****
sslRootCAPath = /opt/splunk/etc/auth/My-CA/CACertificates.txt
serverCert = /opt/splunk/etc/auth/My-CA/SplunkIndexerCerts.txt

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[license]
master_uri = https://license.tupislab.net:8089/


I hope this helps you.

Best regards,

Petri

potnuru
Path Finder

@kaurinko I think the above error is due to a port issue; maybe the port is being used by some other process. I have changed the port and now it's working perfectly, thanks for your help.

0 Karma

potnuru
Path Finder

@kaurinko 

I have configured the Splunk indexer as you described, but I am getting the errors below in the _internal logs. Please help me with this.

2020-07-07 09:21:28.243 +0200 4027@splunk01 [main] ERROR io.dropwizard.cli.ServerCommand - Unable to start server, shutting down
java.lang.RuntimeException: java.io.IOException: Failed to bind to /127.0.0.1:9998
at org.eclipse.jetty.setuid.SetUIDListener.lifeCycleStarting(SetUIDListener.java:213)
at org.eclipse.jetty.util.component.AbstractLifeCycle.setStarting(AbstractLifeCycle.java:204)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:71)
at io.dropwizard.cli.ServerCommand.run(ServerCommand.java:53)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:45)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:87)
at io.dropwizard.cli.Cli.run(Cli.java:79)
at io.dropwizard.Application.run(Application.java:94)
at com.splunk.dbx.server.bootstrap.TaskServerStart.startTaskServer(TaskServerStart.java:123)
at com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:73)
at com.splunk.modularinput.Script.run(Script.java:66)
at com.splunk.modularinput.Script.run(Script.java:44)
at com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:152)
Caused by: java.io.IOException: Failed to bind to /127.0.0.1:9998
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
at org.eclipse.jetty.setuid.SetUIDListener.lifeCycleStarting(SetUIDListener.java:200)
... 12 common frames omitted
Caused by: java.net.BindException: Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:461)
at java.base/sun.nio.ch.Net.bind(Net.java:453)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
... 14 common frames omitted

0 Karma

kaurinko
Communicator

Hi @potnuru ,

Based on the log, there most probably already is a process listening on 127.0.0.1:9998. Try issuing the following (as root, to get the process name):

netstat -lntp | grep 9998

The error message "Caused by: java.net.BindException: Address already in use" means some other process is already listening on the address:port you are trying to bind to.
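
If netstat is not available, something similar can be done with ss or lsof (again as root to see the owning process):

ss -lntp | grep 9998
lsof -i :9998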

Best regards,

Petri

0 Karma

burwell
SplunkTrust

I am not familiar with SubCA.

For us on 7.x, we do the following on our heavy forwarders (which act like indexers here).

For inputs.conf:

[splunktcp-ssl:<port>]

[SSL]
requireClientCert = true
sslCommonNameToCheck = <comma separated list of common names>
serverCert = <concatenated server cert>

In server.conf we have this:

[sslConfig]
enableSplunkdSSL = true
requireClientCert = true
sslRootCAPath = <our ca cert>
serverCert = <our concatenated cert; see below>
sslCommonNameToCheck = <our common name>

For our cert we concatenate together:

cert
key
caCert

anilchaithu
Builder

@burwell 

What would be the ideal config on the forwarders? Is it possible to set up outputs on a forwarder without using sslPassword?

Since this configuration is on the DS (deployment server) and will be deployed to all forwarders, I would like to avoid storing the password in cleartext.

 

 

0 Karma

kaurinko
Communicator

Hi Burwell,

The sub CA is nothing more than a synonym for an intermediate CA. In my case it is a CA issued by the root CA, and the root CA is by definition a self-signed CA certificate. My idea has been that at some undefined point in the future I might move to another intermediate CA issued by the same root CA, to cope with possible expiration problems. Some of the documentation hinted that such a trust chain would be possible.

Is it possible to have the indexer listen on two ports: one for non-SSL and another for SSL connections? Will adding a [splunktcp-ssl:] stanza with the necessary SSL configuration do the trick?

0 Karma

burwell
SplunkTrust

Yes, you can have your indexers listen on two different ports: one with SSL and one without.

0 Karma

edhealea
Path Finder

How do you determine what goes down the 9997 pipe versus the encrypted 9998 pipe on your forwarders?

0 Karma

kaurinko
Communicator

Hi,

You do that in inputs.conf and outputs.conf on the UF.
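
As a rough sketch, outputs.conf on the UF can define one plain and one SSL output group, and individual inputs can then be routed to either group with _TCP_ROUTING in inputs.conf. The host names, group names, file paths and the monitored file below are placeholders, and the exact SSL setting names (for example sslCertPath vs. clientCert) vary between Splunk versions, so check the outputs.conf spec for your version:

# outputs.conf on the UF
[tcpout]
defaultGroup = plain_group

[tcpout:plain_group]
server = splunk-indexer.example.net:9997

[tcpout:ssl_group]
server = splunk-indexer.example.net:9998
sslRootCAPath = /opt/splunkforwarder/etc/auth/my-CA/myCACertificates.txt
sslVerifyServerCert = true
# if the indexer requires client certificates, also set the client cert and its password here

# inputs.conf on the UF: send this particular input over the SSL group
[monitor:///var/log/secure]
_TCP_ROUTING = ssl_group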

0 Karma

inventsekar
SplunkTrust

I think that depends on the inputs.conf on your UFs, doesn't it?

thanks and best regards,
Sekar

PS - If this or any post helped you in any way, please consider upvoting. Thanks for reading!