Deployment Architecture

Universal forwarder through a TLS load balancer and no client certificate

gc_os76
Explorer

Hi,

I've got a setup where my universal forwarder clients will submit logs to a Splunk indexer instance through an L4 load balancer.

I'd like the communication between the universal forwarders and the balancer to be encrypted.

My setup would be something like:

UF > TLS LB > TCP input on the Splunk indexer

How can I configure the outputs on the UF side to be sent over TLS 1.2 without the client certificate validation phase?

I did use a setting like

useSSL = true

on my forwarder.

According to this snippet of the outputs.conf configuration page (https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Outputsconf), it should enable just the encrypted outgoing stream without requiring a client certificate (unlike "legacy" mode, which relies on 'clientCert'):

----Secure Sockets Layer (SSL) Settings----

 To set up SSL on the forwarder, set the following setting/value pairs.
 If you want to use SSL for authentication, add a stanza for each receiver
 that must be certified.

useSSL = <true|false|legacy>
* Whether or not the forwarder uses SSL to connect to the receiver, or relies
  on the 'clientCert' setting to be active for SSL connections.
* You do not need to set 'clientCert' if 'requireClientCert' is set to
  "false" on the receiver.
* If set to "true", then the forwarder uses SSL to connect to the receiver.
* If set to "false", then the forwarder does not use SSL to connect to the
  receiver.
* If set to "legacy", then the forwarder uses the 'clientCert' property to
  determine whether or not to use SSL to connect.
* Default: legacy
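For reference, the receiver-side counterpart the snippet refers to would be an inputs.conf with 'requireClientCert' left at "false". A minimal sketch, assuming the Splunk instance itself terminates TLS (the certificate path and password are placeholders):

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/server.pem
sslPassword = <certificate_password>
requireClientCert = false
sslVersions = tls1.2

In my setup, though, TLS terminates at the load balancer and the indexer keeps a plain [splunktcp://9997] input.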

As the universal forwarder client I'm using the latest Docker image provided by Splunk, and I push an outputs.conf to it using the deployment server.

The outputs.conf looks like:

[tcpout]
defaultGroup = tcpin

[tcpout:tcpin]
useSSL = true
sslVersions = tls1.2
useClientSSLCompression = true
server = my_lb_dns_name:9997

 

From the container I'm able to reach the LB with the following command:

sudo -u splunk LD_LIBRARY_PATH=./lib ./bin/openssl s_client -connect my_lb_dns_name:9997

But in splunkd.log I see warnings like:

WARN TcpOutputProc - Cooked connection to ip=10.235.106.194:9997 timed out

 

Can someone help me figure out what I'm missing?

Thanks,

Giuseppe

 

 


richgalloway
SplunkTrust

Do NOT use a load balancer in front of Splunk indexers.  The Splunk-to-Splunk protocol is not supported by third-party LBs so you'll likely not get the results you seek.  It's also not a supported configuration.  Universal Forwarders have built-in load balancing so an external LB is not needed.  
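For illustration, a minimal outputs.conf sketch of that built-in load balancing, with placeholder indexer hostnames (the forwarder rotates through the listed targets on its own):

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# Optional: seconds between target switches; 30 is the default.
# autoLBFrequency = 30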

---
If this reply helps you, Karma would be appreciated.

mattymo
Splunk Employee

Hi!

Before we go further down this rabbit hole 🐰🕳, please review the docs here, which state "do not insert a LB in the Splunk2Splunk protocol connection".

https://docs.splunk.com/Documentation/Forwarder/8.0.5/Forwarder/Configureloadbalancing#How_load_bala...

This is a topic I have feelings on, and I'm all for healthy questioning of constraints, but before we go further: what are you hoping to get from the LB, and who is your LB vendor?

What does your Splunk architecture look like? Are we talking many connections, high ingest?

Can we use DNS instead?

Do you have a mesh solution like Istio?

- MattyMo

gc_os76
Explorer

Hi @mattymo and @richgalloway ,

thanks for the highlights.

Getting the LB out of the way should not be a big problem at the moment, since the infrastructure is as simple as described in my previous post.

It is supposed to grow later, but since it will run in an AWS VPC, my main reason for introducing an LB was to give a unique and persistent entry point for log ingestion to all the clients I'm going to have in the future.

Indexer instance failover was my main concern.

I guess dynamically updating Route 53 records in case of a respawn could be a better option: is there any best practice I can follow?

 

Best regards,

Giuseppe


mattymo
Splunk Employee

I would recommend you start with:

https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Setuploadbalancingd#Specify_static_or_...

And set DNS A records. See Splunk Docs. 
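As a rough sketch of the DNS-based variant, assuming a placeholder name splunk-idx.example.com that has one A record per indexer, the UF just points at the single name:

[tcpout:dns_lb_group]
server = splunk-idx.example.com:9997

The forwarder then distributes its connections across the addresses the name resolves to, so adding or removing indexers becomes a DNS change only.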

I won't go as far as to say never use an LB between the UF and the indexers, because I know that with newer UF versions, and from working with Splunk Engineering, there are ways to make the UF "LB friendly" thanks to the light event breaking the UF does. But I would agree that, if you are using the traditional Splunk-to-Splunk (S2S) protocol, just go all the way and rely on the application-level load balancing described in the doc above.

holler at me on slack (mattymo) if you want to chat more!

- MattyMo

richgalloway
SplunkTrust

A simple, single-indexer Splunk environment does not have indexer fail-over as there is no other indexer to fail to.

If you expect to expand then it's a good idea to set up your indexer as a cluster of one.  Yes, you will need an extra instance to serve as the cluster master (CM), but it will help you in the end.  With an indexer cluster, you can set up the forwarders to use the Indexer Discovery feature.  That tells the UFs to ask the CM for a list of indexers to send to.  When the indexers change, UFs will learn about it automatically.
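For reference, a rough sketch of the Indexer Discovery wiring on the UF side, with a placeholder CM host and shared key (the pass4SymmKey must match the one set on the CM):

outputs.conf on the forwarder:

[indexer_discovery:my_cluster]
master_uri = https://cm.example.com:8089
pass4SymmKey = <shared_secret>

[tcpout:my_cluster_group]
indexerDiscovery = my_cluster

[tcpout]
defaultGroup = my_cluster_group

server.conf on the cluster master:

[indexer_discovery]
pass4SymmKey = <shared_secret>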

---
If this reply helps you, Karma would be appreciated.