I'm trying to figure out the change order I need to successfully implement SSL + certs across almost everything (forwarders, search head clusters, indexer clusters, deployers, deployment server, cluster masters). The exception is uncontrolled upstream customer UFs.
I tried this on one of my test machines, and as soon as I enabled some of the recommended post-POODLE/Heartbleed cipher settings it instantly broke, with numerous handshake errors to every machine. This has led me to look for documentation on the best place to start.
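For reference, this is the kind of hardening I mean. A sketch only: `sslVersions` and `cipherSuite` are the relevant server.conf attributes as I understand them, and the values below are examples, not a recommendation.

```ini
# server.conf -- sketch of the post-POODLE/Heartbleed hardening that
# triggered the handshake errors for me. Example values only.
[sslConfig]
sslVersions = tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256
```

The catch is that both ends of every connection must still share at least one protocol version and one cipher suite, or the handshake fails.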
This link shows the large number of areas that can break when SSL is enabled, but not the order in which to enable them so that data keeps being forwarded and remains searchable with minimal downtime. Stopping every single instance to do this isn't an option.
The other complication is that we have client universal forwarders that forward traffic to intermediate UFs, which then send the data on to our forwarders. As we have no control over those clients, we would still need to support non-SSL and insecure SSL ciphers at the edge. When traffic is transferred internally, it needs to be SSL.
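One way to keep the uncontrolled UFs working while securing internal traffic might be to listen on two ports on the intermediates: plain `splunktcp` for the third-party senders and `splunktcp-ssl` for our own. A sketch, with example port numbers and a hypothetical cert path:

```ini
# inputs.conf on the intermediate/receiving forwarders (sketch).

# Plain receiver for the third-party UFs we don't control:
[splunktcp://9997]
disabled = 0

# SSL receiver for our own, internally managed forwarders:
[splunktcp-ssl://9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem   # example path
sslPassword = <redacted>
requireClientCert = true
```

Internal forwarders would then point their outputs at 9998 with SSL settings, while the uncontrolled UFs keep sending to 9997 untouched.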
I'm thinking the order might be something like:
The problem I have is that even when I configure a limited cipher list on one box while the machine it communicates with still has all of the ciphers configured, the connection seems to break.
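Assuming the OpenSSL CLI is available on both boxes, one quick sanity check is to expand each side's cipher string locally and see whether the two lists overlap at all; if they don't share at least one suite (and a common TLS version), the handshake will always fail. The cipher string below is just an example:

```shell
# Expand a candidate cipherSuite string into the concrete suites this
# host's OpenSSL build supports; run with each side's string and compare.
openssl ciphers -v 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256'
```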
Every time this issue has been raised with the client, I've run through the process and found a show-stopper at each stage. In a large distributed installation with many third-party-managed UFs in use, enabling SSL on certain machines stops traffic being indexed when it shouldn't. Really confusing.
MuS, I'm going to mark that Waddle link as the answer, as its slides (http://www.duanewaddle.com/wp-content/uploads/2014/10/Splunk-SSL-Presentation.pdf) show nicely which parts of the network communication become encrypted as each setting is turned on, which was what I was after.