It seems the problem was missing information in my client's inputs.conf. The destination index needed to be specified this way:

[monitor:///tmp/instance.log]
disabled = false
sourcetype = generic_single_line
index = minikube_on_hs9
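One thing worth double-checking for anyone hitting the same symptom (an assumption on my part, not something I had to change): the target index must also exist on the indexer side, e.g. with a minimal indexes.conf stanza like:

```ini
# indexes.conf on the indexer -- paths below are the conventional
# $SPLUNK_DB defaults; adjust them to your own storage layout.
[minikube_on_hs9]
homePath   = $SPLUNK_DB/minikube_on_hs9/db
coldPath   = $SPLUNK_DB/minikube_on_hs9/colddb
thawedPath = $SPLUNK_DB/minikube_on_hs9/thaweddb
```

If the index is missing, events addressed to it are typically dropped or redirected rather than indexed where you expect.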
Hi all, I'm trying to configure a splunktcp input endpoint so that I can route incoming traffic to a few different indexes, using tokens to do this. From the Splunk web GUI I can see that it is possible to bind an HEC token to a certain index, but it seems I'm not able to do the same with TCP tokens. I've been following this documentation to define my token: https://docs.splunk.com/Documentation/Forwarder/8.0.5/Forwarder/Controlforwarderaccess and then defining this in my indexer instance's inputs.conf file:

[splunktcp://9997]
disabled = 0
index = minikube_on_hs9

[splunktcptoken://my_token]
disabled = 0
index = minikube_on_hs9
token = $7$qlaEJcxHynjqXZHqCddO61xXxB/FUh/aooVPVFvjBEde9OnUZPx6Oz/Te8ye0lJKR/3tkNuCXjK8ccPLsKARgNAIkSg=

After a restart of Splunk, the status of the token was as shown in the attached screenshot. At the moment of creation, though, the token appeared to be associated with the default index. On my test client, in the outputs.conf file, I have something like this:

[tcpout]
defaultGroup = tcpin-sre-tools
token = $7$2ygFLiflfLjPs/n/jXxuOBI/aSgTK/Hwf+IcSSMkAtt6V+ATWCbOm4+95VpVPag05bco0qjlMuEckfcxtZDBa7h1fu0=

[tcpout:tcpin-sre-tools]
server = my_server_name:9997

I get no errors about a missing or mismatched token, but I'm still unable to route logs from my client into the specified index on the Splunk side: the logs get ingested into the main index. Any idea how to fix this?

Best regards,
Giuseppe
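In case token-based routing turns out not to be supported for this, one fallback I'm aware of is overriding the destination index at parse time on the indexer with props/transforms. A sketch (the sourcetype name is just taken from my own setup; the transform name is arbitrary):

```ini
# props.conf on the indexer -- apply the transform to this sourcetype
[generic_single_line]
TRANSFORMS-routeidx = route_to_minikube

# transforms.conf on the indexer -- rewrite the destination index key
[route_to_minikube]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = minikube_on_hs9
```

The `REGEX = .` matches every event of that sourcetype; a more selective regex (e.g. on host or a field in the raw event) would let different events land in different indexes.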
Hi @mattymo and @richgalloway, thanks for the highlights. Getting the LB out of the way should not be a big problem at the moment, since the infrastructure is as simple as described in my previous post. It is supposed to grow later but, since it will run in an AWS VPC, my main reason for introducing a LB was to give all the clients I'm going to have in the future a unique and persistent entry point for log ingestion. Indexer instance failover was my main concern. I guess dynamically updating Route 53 records in case of a respawn could be a better option: is there any best practice I can follow?

Best regards,
Giuseppe
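What I had in mind is something like this boto3 sketch (zone ID and record name are placeholders; the helper that builds the change batch is my own naming, not an AWS API):

```python
def build_upsert_batch(record_name, ip, ttl=60):
    """Build a Route 53 change batch that UPSERTs an A record to the new IP.

    Pure data construction; the actual API call happens in repoint() below.
    """
    return {
        "Comment": "repoint indexer record after respawn",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }],
    }


def repoint(zone_id, record_name, ip):
    """Point record_name at the respawned instance's IP via Route 53."""
    import boto3  # requires AWS credentials in the environment

    client = boto3.client("route53")
    return client.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch=build_upsert_batch(record_name, ip),
    )
```

A short TTL keeps the failover window small at the cost of more DNS queries from the forwarders.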
Hi, I've got a setup where my universal forwarder clients submit logs to a Splunk indexer instance through an L4 load balancer. I'd like the communication between the universal forwarders and the balancer to be encrypted. My setup would be something like:

UF > TLS LB > TCP input on the Splunk indexer

How can I enable the outputs on the UF side to be sent over TLS 1.2 without the client certificate validation phase? I did set useSSL = true on my forwarder. According to this snippet of the outputs.conf configuration page (https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Outputsconf), it should enable just the encrypted outgoing stream without requiring a client certificate (as in "legacy" mode):

----Secure Sockets Layer (SSL) Settings----
To set up SSL on the forwarder, set the following setting/value pairs.
If you want to use SSL for authentication, add a stanza for each receiver
that must be certified.
useSSL = <true|false|legacy>
* Whether or not the forwarder uses SSL to connect to the receiver, or relies
on the 'clientCert' setting to be active for SSL connections.
* You do not need to set 'clientCert' if 'requireClientCert' is set to
"false" on the receiver.
* If set to "true", then the forwarder uses SSL to connect to the receiver.
* If set to "false", then the forwarder does not use SSL to connect to the receiver.
* If set to "legacy", then the forwarder uses the 'clientCert' property to
determine whether or not to use SSL to connect.
* Default: legacy

As the universal forwarder client I'm using the latest Docker image provided by Splunk, and I push an outputs.conf to it using the deployment server. The outputs.conf looks like:

[tcpout]
defaultGroup = tcpin

[tcpout:tcpin]
useSSL = true
sslVersions = tls1.2
useClientSSLCompression = true
server = my_lb_dns_name:9997

From the container I'm able to reach the LB with the following command:

sudo -u splunk LD_LIBRARY_PATH=./lib ./bin/openssl s_client -connect my_lb_dns_name:9997

But in splunkd.log I see warnings like:

WARN TcpOutputProc - Cooked connection to ip=10.235.106.194:9997 timed out

Can someone help me figure out what I'm missing?

Thanks,
Giuseppe
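For reference, this is the full SSL-related stanza I'd expect on the UF side when the LB presents a certificate the forwarder should not validate (a sketch under that assumption; server name is a placeholder):

```ini
# outputs.conf on the universal forwarder
[tcpout:tcpin]
server = my_lb_dns_name:9997
useSSL = true
sslVersions = tls1.2
# Skip validation of the certificate presented by the LB.
sslVerifyServerCert = false
```

Since the openssl s_client handshake succeeds but the cooked connection times out, it may also be worth checking that the LB forwards to a port where the indexer's splunktcp input actually listens, and whether that input itself expects SSL.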