Overnight, the indexer stopped receiving data from all of the forwarders. Up until that point it had been receiving data from all of them without issue.
The splunkd.log on the forwarders shows the following error:
05-26-2016 09:48:15.956 +1000 WARN DeploymentClient - Unable to send handshake message to deployment server. Error status is: not_connected
05-26-2016 09:48:22.644 +1000 ERROR TcpOutputFd - Connection to host=externalip:9996 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
05-26-2016 09:48:22.644 +1000 WARN TcpOutputProc - Applying quarantine to idx=externalip:9996 numberOfFailures=2
In the excerpt above I have replaced my external IP with externalip.
We hadn't made any configuration changes before the issue occurred. Once it happened and I saw the error, I replaced the default expiring certificates (as per the recent email), thinking they might be the problem, and restarted, but the issue is still happening.
I have also tried setting sslVerifyServerCert = false in outputs.conf on a forwarder, but this didn't help; I still got the same error.
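For reference, the stanza I changed looks roughly like this (the stanza name here is made up; the host and port match the log excerpt above):

```ini
[tcpout:primary_indexers]
server = externalip:9996
useSSL = true
sslVerifyServerCert = false
```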
I inherited this Splunk install when a colleague left a few months ago, so I am still learning, having never used Splunk before that, and I haven't been able to figure out what to try next.
Thanks for the suggestion. I thought I had checked all of the certificates, but after you mentioned it again I went through them more meticulously and found one that expired yesterday! Thanks a lot for your assistance.
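For anyone who hits the same thing, a quick way to go through the certificates is a loop like the sketch below. The SPLUNK_HOME default and the etc/auth location are assumptions; adjust the paths for your own install.

```shell
#!/bin/sh
# Sketch: print the expiry date of every PEM certificate under a Splunk
# install and flag any that expire within 30 days.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"

find "$SPLUNK_HOME/etc/auth" -name '*.pem' 2>/dev/null | while read -r cert; do
  # -enddate prints the notAfter field; files that are not certificates
  # (e.g. private keys) make openssl fail, so skip them.
  expiry=$(openssl x509 -in "$cert" -noout -enddate 2>/dev/null) || continue
  # -checkend N exits non-zero if the cert expires within N seconds
  # (2592000 s = 30 days).
  if openssl x509 -in "$cert" -noout -checkend 2592000 >/dev/null; then
    echo "OK       $cert ($expiry)"
  else
    echo "EXPIRING $cert ($expiry)"
  fi
done
```

Running this on each forwarder and on the indexer would have caught my expired certificate much sooner.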