Security

For Splunk Enterprise, Splunk Light, and Hunk pre 6.3, default root certificates expire on July 21, 2016 - Recommendations?

Ellen
Splunk Employee

For Splunk Enterprise, Splunk Light, and Hunk versions prior to 6.3, the default root certificates will expire on July 21, 2016.

What are the suggested recommendations?

1 Solution

Ellen
Splunk Employee

PRODUCT ADVISORY: Pre-6.3 Splunk Enterprise, Splunk Light, and Hunk default root certificates expire on July 21, 2016.

(Updated: May 19, 2016)


SUMMARY

Instances of Splunk Enterprise, Splunk Light, and Hunk that are older than 6.3 AND that are using the default certificates will no longer be able to communicate with each other after July 21, 2016, unless the certificates are replaced OR Splunk is upgraded to 6.3 or later.

Please note that for all Splunk Enterprise versions, the default root certificate that ships with Splunk is the same in every download.
That means that anyone who has downloaded Splunk has server certificates signed by the same root certificate, and could therefore authenticate against your instances. To ensure that no one can easily snoop on your traffic or wrongfully send data to your indexers, we strongly recommend that you replace the default certificates with certificates signed by a reputable third-party certificate authority.


IMPACT

Failure to replace the certificates prior to this date will result in the immediate cessation of network traffic for any connection that uses them.

Expiration of Splunk certificates does not affect:

1) Splunk instances that are in Splunk Cloud

  • SSL certificates used for Splunk Cloud instances are not the default Splunk certificates
  • Forwarder to Splunk Cloud traffic is not impacted; however, relay forwarders (forwarder to forwarder) can be impacted if you chose to use the default Splunk certificates for this communication

2) Splunk instances that use certificates that are internally generated (self-signed) or obtained from an external Certificate Authority (CA).

3) Splunk instances in your configuration that are upgraded to 6.3 or above and use that version’s root certificates.

4) Splunk instances that do NOT use SSL - (This is the default configuration for forwarder to indexer communication)

Certificate expiration DOES affect Splunk deployments where:

Any or all Splunk instances in your deployment run a release prior to 6.3 and use the Splunk default certificates. This includes:

  • Search Heads
  • Indexers
  • License Masters
  • Cluster Masters
  • Deployers
  • Forwarders

RECOMMENDATIONS

There are several options for resolving certificate expiration. You must take action prior to July 21, 2016.

1) Remain at your current Splunk version (pre-6.3) and manually replace the current default root certificates with the provided shell script appropriate for your operating system. Note that the shell script only replaces the current default root certificate with a new (cloned) certificate that has a future expiration date. The script does not replace a Splunk default certificate with your own certificate.

The script is available at:

http://download.splunk.com/products/certificates/renewcerts-2016-05-05.zip

Update: minor script changes to update messages and to remove the redirect of stderr to /dev/null when checking the OpenSSL version.

Please be sure to read the README.txt included in the zip file before running the script.
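Before and after running the script, it is worth checking which certificates are actually close to expiry. A minimal sketch using openssl's `-checkend` test; the /opt/splunk default path is an assumption, so adjust SPLUNK_HOME for your install:

```shell
#!/bin/sh
# expires_soon FILE SECONDS -- report whether a PEM certificate expires within
# the given number of seconds, using openssl's -checkend test.
expires_soon() {
    if openssl x509 -checkend "$2" -noout -in "$1" >/dev/null 2>&1; then
        echo "$1: ok (valid beyond the window)"
    else
        echo "$1: EXPIRES within ${2}s (or is unreadable)"
    fi
}

# /opt/splunk is an assumed install path; set SPLUNK_HOME for your environment.
for pem in "${SPLUNK_HOME:-/opt/splunk}"/etc/auth/*.pem; do
    if [ -f "$pem" ]; then
        expires_soon "$pem" $((30*24*3600))   # 30-day warning window
    fi
done
```

Run this on each instance (or push it via the deployment server) to prioritize which hosts need the renewal script first.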

2) Upgrade all Splunk instances in your environment to 6.3 or above and use self-signed or CA-signed certificates. We strongly recommend this as the most secure option. Replace the current default root certificates with your own certificates. Download the following document to learn about hardening your Splunk infrastructure:

Splunk Security: Hardening Standards

3) Remain at your current Splunk version (pre-6.3) and use self-signed or CA-signed certificates. Replace the current default root certificates with your own certificates. Download the following document to learn about hardening your Splunk infrastructure.

Splunk Security: Hardening Standards

4) Upgrade ALL Splunk instances to 6.3 or above and use those default root certificates.

Note: Prior to the upgrade, remove any existing copies of the Splunk default certificates ca.pem and cacert.pem that are in use.
Refer to: Upgrading my Splunk Enterprise 6.2.x to 6.3.x did not upgrade the expiration dates on my default SSL...

See the following link to learn about adding certificates:
Securing Splunk Enterprise
Use the following procedure to configure default certificates:
Configure Splunk forwarding to use the default certificate
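For orientation, this is roughly what forwarder-to-indexer SSL with the shipped defaults looks like in 6.x-era configuration files. A sketch, not authoritative: the output group name and indexer host are placeholders, "password" is the shipped default for the bundled certs, and attribute names changed in later releases, so check the outputs.conf/inputs.conf spec for your version:

```ini
# outputs.conf on the forwarder (placeholder group name and host)
[tcpout:ssl_indexers]
server = indexer.example.com:9997
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslPassword = password
sslVerifyServerCert = false

# inputs.conf on the indexer
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
password = password
```

If you never added stanzas like these, your forwarding is plain splunktcp and the certificate expiration does not affect it.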


nbowman
Path Finder

Does anyone know of a way to dump a list of deployment clients that are forwarding via splunktcp-ssl, so I can focus my efforts on those first?


jbarlow_splunk
Splunk Employee

Something like this:

index=_internal source=metrics.log group=tcpin_connections ssl=true | dedup hostname | table hostname sourceIp fwdType connectionType version destPort ssl

Lucas_K
Motivator

Finally got the renew script to run on a Windows machine using only existing Splunk binaries.

@echo off
set PATH=%PATH%;C:\Program Files\SplunkUniversalForwarder\bin\
powershell -ExecutionPolicy ByPass -File "C:\Program Files\SplunkUniversalForwarder\etc\apps\ssl_check_windows-x64\bin\s-renewcerts.ps1" -defaultCA -liveCA -serverCert -opensslConf "C:\Program Files\SplunkUniversalForwarder\openssl.cnf" -splunkHome "C:\Program Files\SplunkUniversalForwarder"

So far it's only been tested on Windows 10. Any ideas on how to automatically grab the Splunk home path into a variable would be great.

dvwijk
Explorer

Works fine for me. I only added the following two commands to restart the forwarder service:

net stop SplunkForwarder
net start SplunkForwarder


joshd
SplunkTrust

Are you sure about the side issues with the Universal Forwarders? What level of testing has been performed?

Right now in my lab we have the following:

  • Universal Forwarder v6.2.9 (ip-172-31-17-125)
  • Deployment Server v6.2.9 (ip-172-31-17-127)
  • Search Head v6.3.2 (ip-172-31-17-128)
  • Indexer v6.3.2 (ip-172-31-17-124)

The system time on all instances has been set beyond the expiry date of the certificate (July 21).

[root@ip-172-31-17-125 auth]# openssl x509 -enddate -noout -in cacert.pem
notAfter=Jul 21 17:12:19 2016 GMT
[root@ip-172-31-17-125 auth]# date
Fri Jul 22 22:46:21 UTC 2016

As expected, Splunk SSL forwarding to the indexer fails with a certificate verification error:

07-22-2016 22:59:00.941 +0000 ERROR TcpOutputFd - Connection to host=172.31.17.124:9997 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

But I am not observing any issues with DS communication. It actually completes the handshake successfully with the DS:

[splunk@ip-172-31-17-125 ~]$ splunk display deploy-client
Deployment Client is enabled.
[splunk@ip-172-31-17-125 ~]$ splunk show deploy-poll
Deployment Server URI is set to "172.31.17.127:8089".

07-22-2016 22:58:12.841 +0000 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
07-22-2016 22:58:17.846 +0000 INFO HttpPubSubConnection - SSL connection with id: connection_172.31.17.125_8089_ip-172-31-17-125.us-west-2.compute.internal_ip-172-31-17-125_uf-6.2
07-22-2016 22:58:17.850 +0000 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_172.31.17.125_8089_ip-172-31-17-125.us-west-2.compute.internal_ip-172-31-17-125_uf-6.2
07-22-2016 22:58:24.842 +0000 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_172.31.17.125_8089_ip-172-31-17-125.us-west-2.compute.internal_ip-172-31-17-125_uf-6.2
07-22-2016 22:58:24.843 +0000 INFO DC:HandshakeReplyHandler - Handshake done.

On the DS side I can see the client phone in, I can assign it to serverclasses and successfully have it download and install deployment applications, even performing a restart after install.

From the CLI/API perspective, I can run commands on the Universal Forwarder to interact as a normal admin might without issue:

[splunk@ip-172-31-17-125 ~]$ splunk disable deploy-client
Your session is invalid. Please login.
Splunk username: admin
Password:
Deployment Client is disabled.
You need to restart the Splunk Server (splunkd) for your changes to take effect.
[splunk@ip-172-31-17-125 ~]$ splunk edit user admin -password doesthiswork
Your session is invalid. Please login.
Splunk username: admin
Password:
User admin edited.

From the Search Head I can actually add the DS as a search peer, further showing splunk-to-splunk API access across 8089 is successful:

[splunk@ip-172-31-17-128 ~]$ splunk list search-server
Server at URI "172.31.17.124:8089" with status as "Up"
Server at URI "172.31.17.127:8089" with status as "Up"

UPDATE 1: Forgot to add that we also spun up a DS on 6.3.2 in the lab, and we observe the same successful communication between the UF on 6.2.9 and the DS on 6.3.2.

yannK
Splunk Employee

Those issues with splunkd are hypothetical.
It is only a problem if the clients are checking the cert validity, which is not the default.


joshd
SplunkTrust

That's what this post is trying to demonstrate: by default, the functionality does not validate the client certificate, so the impact is less than what's stated. Who really turns on client certificate validation without putting down their own set of certs at the same time? Therefore, environments using old forwarders with default certs for internal splunk-to-splunk communication can survive past July 21st, on the condition that the certificate is not being used for Splunk TCP SSL forwarding.

I am not arguing that keeping expired certs is good practice; it's simply that users have more of a window to upgrade the UF or the cert, as long as the cert is not being used for Splunk TCP SSL forwarding. If the cert is being used for forwarding, then the rush is on...

Lucas_K
Motivator

+1 thanks joshd.

You wouldn't believe the number of forwarders we have to figure out the owners for. I ran a report, and just for the ones that actually forward internal logs, our original estimates were off by 50%. This would have had a massive negative impact on the company I work for, and might have resulted in some ass kicking somewhere along the line. There were already factions that were anti-Splunk, and this would only have added to their arsenal.


joshd
SplunkTrust

I hear ya, I see those conversations all too often. We can't give the naysayers any fuel for their fight!

That being said, I would never disagree that this is a good time for everyone to kick off the development of a process for ripping out vendor-provided certs in favor of their own. Definitely not easy for legacy stuff, but at least now is the time to start building the process for future rollouts and to get a good handle on @dwaddle's awesome SSL presentations/talks.

mcluver
Path Finder

joshd: Thanks for taking the time to do this testing; your comments and results are making me feel a lot more comfortable about not having to update thousands of forwarders globally on a whim. I think we need a better response from Splunk than something hypothetical; our businesses depend on their software. I did find it kind of strange that they would shut everyone out and force them to upgrade every single instance of Splunk ever installed.

joshd
SplunkTrust

Thanks @mcluver ... glad the Discovered Intelligence team and I could help out -- your comments are the exact reason we jumped on this testing in the first place today 🙂


yannK
Splunk Employee

Joshd, thank you for the thorough testing.
I will update my initial answer to be less alarmist.


yannK
Splunk Employee

A remark specific to Splunk Cloud customers:
The SSL certificates used for outputs.conf are not the stock Splunk certificates and therefore will not expire, so forwarding to Splunk Cloud will continue.

For non-cloud customers, forwarding will be impacted by the expiration of the default cert if you are using splunktcp-ssl.

[EDIT after keen remarks and testing from joshd]

The splunkd and splunkweb processes will still work using the expired certificate on the forwarders.
The only situation to be aware of is if you have set up your servers to validate the certificates (for example, a deployment server or an API connector). Since that is not the default behavior, we can assume that in that case you have already switched to your own certificates and are managing them.


Be healthy: upgrade your forwarders.


JScordo
Path Finder

Will this have any impact on Splunk Cloud customers?

jbrodsky_splunk
Splunk Employee

No, @JScordo. By design, each Splunk Cloud customer is issued certificates that are specific and unique to their organization. This is what comes in the forwarder app that you are given when you become a Splunk Cloud customer. Splunk Cloud does NOT use the default root certificates that ship with Splunk Enterprise, Light, and Hunk.


joshd
SplunkTrust

@jbrodsky does this need to be qualified a bit further, though? The forwarder application provided for Cloud customers only contains the outputs.conf, so the default certs would still be used for the REST API (default mgmt port 8089). However, as demonstrated by our tests below, this should not impact the operation of a Universal Forwarder or Heavy Forwarder (which has yet to be mentioned as well), since we were not able to observe any default enforcement of certificate validation on the REST API. Any on-prem Deployment Server used to manage the U/H Forwarders should also continue to use the default certs and work regardless of version.

So really from a Forwarder perspective, the only impact of all of this (Cloud customer, or not) would be on the use of Splunk TCP SSL forwarding (splunktcp-ssl).

Am I fair to say this?


jbrodsky_splunk
Splunk Employee

@joshd I believe you are correct. I also believe that your responses and @yannK's responses are providing some needed clarification here. Yes: on Splunk Cloud we provide an outputs.conf, a specific custom SSL cert, and we turn on SSL forwarding with this cert, and we verify the server cert via the "sslVerifyServerCert=true" parameter. We don't do anything for Deployment Server (DS) because, as you state, that's on-prem.

In Splunk Cloud we do not run DS - you run DS on-prem. If you are using the default certs for this communication, then cert validation is not done by default, and communication between UF and on-prem DS should not be affected. I can also confirm that on a 6.2.3 forwarder here in my lab with completely stock configs, it communicates with my DS (6.3.3) just fine even though I set the date on it to 8/5/16 for testing. This is because there's no default enforcement of cert validation for UF to DS communication.

TL; DR - Splunk Cloud customers do not have default certs for forwarding into Splunk Cloud and should not be affected. IMHO: Splunk Cloud customers running an on-prem DS to configure their UFs who have not changed their default SSL configs should not be affected because certificate validation is not enforced. I am not the ultimate authority on this matter however, and would like confirmation from others at Splunk.


mcluver
Path Finder

@jbrodsky_splunk I think we're finally getting to the crux of the issue that some of us are having with the language and implications of the advisory. There should be a statement clearly indicating that using the default root cert is not the out-of-the-box default behavior of Splunk. Therefore, as tested and proven, you may still have older forwarders in your deployment, and the expiring root cert will not affect them.

The reason, as you mention, is that outputs.conf requires the SSL parameters to be set explicitly to force SSL between forwarders and indexers; again, not the default behavior as implied. The only other communication is splunkd on 8089 by default, which has its SSL parameters defined in server.conf. Looking at the server.conf spec, you could easily conclude that this advisory affects your deployment, since enableSplunkdSSL is set to true by default.

You hit the nail on the head, however, with the sslVerifyServerCert parameter: on a default Splunk install it is false, so certificate validation is not enforced for splunkd either. We may surmise that, in fact, Splunk does not force certificate validation anywhere by default; this is probably by design for just this scenario.

sslVerifyServerCert = true|false
* Used by distributed search: when making a search request to another
server in the search cluster.
* Used by distributed deployment clients: when polling a deployment
server.
* If this is set to true, you should make sure that the server that is
being connected to is a valid one (authenticated). Both the common
name and the alternate name of the server are then checked for a
match if they are specified in this configuration file. A
certificate is considered verified if either is matched.
* Default is false.

http://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf
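One quick way to confirm whether certificate validation has been switched on anywhere in a deployment is to look for explicit sslVerifyServerCert settings in the configuration tree. A sketch; the /opt/splunk default below is an assumption, and `splunk btool server list sslConfig` gives the same answer with configuration layering resolved:

```shell
#!/bin/sh
# check_ssl_verify DIR -- list explicit sslVerifyServerCert settings under DIR;
# if none are found, the documented default (false) applies, and expired default
# certs will not break splunkd-to-splunkd connections.
check_ssl_verify() {
    matches=$(grep -ri "sslVerifyServerCert" "$1" 2>/dev/null || true)
    if [ -n "$matches" ]; then
        echo "explicit sslVerifyServerCert settings found:"
        echo "$matches"
    else
        echo "no explicit setting; default (false) applies"
    fi
}

# /opt/splunk is an assumed install path; adjust for your environment.
check_ssl_verify "${SPLUNK_HOME:-/opt/splunk}/etc"
```

Any hit outside the shipped spec/README files is a stanza someone added deliberately, which per the discussion above usually means custom certs are already in place.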


weeb
Splunk Employee

The following steps generate a new root certificate, valid for 10 years, from the existing key, and install it in place of the existing cacert.pem and ca.pem.

  1. Stop Splunk
  2. Run the following:

    $ openssl req -new -key ca.pem -x509 -days 3650 > cacert.pem
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:US
    State or Province Name (full name) []:CA
    Locality Name (eg, city) [Default City]:San Francisco
    Organization Name (eg, company) [Default Company Ltd]:Splunk
    Organizational Unit Name (eg, section) []:SplunkCommonCA
    Common Name (eg, your name or your server's hostname) []:SplunkCommonCA
    Email Address []:support@splunk.com

  3. $ cat cacert.pem > ca.pem

  4. Start Splunk.

  5. Confirm by running the following:

    $ openssl x509 -in "ca.pem" -text -noout
    $ openssl x509 -in "cacert.pem" -text -noout
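If you need to script this across many hosts, the interactive prompts above can be avoided by passing the subject on the command line with -subj. A sketch, demonstrated against a throwaway key rather than a real ca.pem; on a live instance you would run it in $SPLUNK_HOME/etc/auth, after backing up ca.pem and cacert.pem:

```shell
#!/bin/sh
# Non-interactive variant of the renewal above: -subj supplies the DN fields
# so there are no prompts. Uses a throwaway key for illustration only.
openssl genrsa -out ca.pem 2048 2>/dev/null   # stand-in for the existing key
openssl req -new -key ca.pem -x509 -days 3650 \
    -subj "/C=US/ST=CA/L=San Francisco/O=Splunk/OU=SplunkCommonCA/CN=SplunkCommonCA/emailAddress=support@splunk.com" \
    -out cacert.pem
openssl x509 -enddate -noout -in cacert.pem   # confirm the new expiration date
```

The DN fields shown match the answers given in the interactive transcript above.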

triest
Communicator

Is $SPLUNK_HOME/etc/auth/openssl really correct?

Just FYI, that doesn't exist on our systems; I would use /opt/splunk/bin/splunk cmd openssl.

The executable is typically /opt/splunk/bin/openssl; on some systems that works, but on others I have to use /opt/splunk/bin/splunk cmd openssl because the library paths need to be set correctly.


Madhan45
Path Finder

Can we extend the expiration date irrespective of the version?

For example, I have 6.2.0 on the indexers and search head, and 6.3.1 on the heavy forwarders. Do you mean I just need to follow the procedure above on all the instances to extend my expiration date?

Thanks in advance.
