My networking team is curious about Transactions Per Second (TPS); by that they mean they want to know how many connections will be made per second.
I haven't seen any information on whether the Universal Forwarder (UF) establishes a persistent connection, or whether it connects only when it needs to send data and/or check in.
Is there any documentation on whether the connection is persistent, or on how many connections per second a UF makes?
We're currently on Splunk 6.6.2.
This is the sheet they sent me from F5:
| Type | Measurement |
| --- | --- |
| Traffic Processing | L7 requests per second |
| | L4 connections per second |
| | L4 HTTP requests per second |
| | Maximum L4 concurrent connections |
| | Throughput (Gbps) — I know that I'll need to calculate this. |
| SSL/TLS | TPS (transactions per second) |
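For the "Throughput (Gbps)" row, a rough back-of-envelope calculation is event rate × average event size × 8 bits, plus some allowance for protocol overhead. The numbers below are hypothetical placeholders, not measurements from this environment; substitute your own event rate and average event size.

```python
# Back-of-envelope throughput estimate for the "Throughput (Gbps)" row.
# All input values are HYPOTHETICAL placeholders -- substitute measured ones.
events_per_second = 20_000   # assumed aggregate event rate across all forwarders
avg_event_bytes = 500        # assumed average raw event size in bytes
overhead_factor = 1.3        # rough allowance for protocol/metadata overhead

throughput_gbps = events_per_second * avg_event_bytes * 8 * overhead_factor / 1e9
print(f"Estimated throughput: {throughput_gbps:.3f} Gbps")
```

With these placeholder numbers that works out to roughly 0.1 Gbps, which is why measuring your real event rate first matters more than the formula itself.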
So if you are talking about running UF communications through a load balancer: don't do that. It is unsupported and you will have tears. Instead, follow the UF deployment docs and use a DNS name that resolves to the IPs of all the target indexers, then deploy a config that references just that DNS name. The UFs will rotate across the IPs in the record. That is the best of the options, since it is often easier to change a DNS record than to update a conf file across lots of UFs.
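In `outputs.conf` terms, that approach might look like the sketch below. The hostname and port here are hypothetical placeholders, not values from this thread; the point is simply that the `server` setting names one DNS record rather than a list of hard-coded IPs.

```ini
# outputs.conf on each UF -- hostname and port are hypothetical placeholders.
# splunk-indexers.example.com carries one A record per target indexer;
# the UF resolves it and rotates across the returned IPs.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexers.example.com:9997
```

Adding or removing an indexer then only requires a DNS change, not a conf push to every UF.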
Thanks for the response, Starcher. This will be for my endpoints on the internet to send their data internally. I'm intending to put Universal Forwarders in the DMZ to relay the data to our internal network.
With that said, my load balancer is my front end. Are you suggesting assigning external IP addresses to each UF/HF I put in the DMZ?
If I have not misunderstood, I hope to achieve the same goal you're aiming for.
I'm fine with the UF's built-in load balancing against indexers when it's installed on a Windows/*nix client that sends directly to an indexer cluster. At the end of the day, if the UF goes down due to server failure or scheduled maintenance, the server itself goes down with it.
I want to have forwarder redundancy for those devices that can send syslog to only a single destination.
Did you find any solution? I was thinking of just creating a VIP/pool/monitor for each required TCP/UDP data input.
I've had the unfortunate scenario where we thought a load balancer would be the answer. It isn't. We thought it'd be fine. It wasn't. 😞
As suggested by starcher - Your best bet is to create a DNS entry (e.g. splunk.ext.yourcompany.com) and set an A record for each Splunk Indexer (or Forwarder) on your boundary.
I think if it were me, I'd have multiple UFs on the boundary (where one side can be reached by the external servers/machines and the other side can see your indexers).
By setting multiple A records on the DNS entry, the Splunk UFs on your remote machines will round-robin their connections to the forwarders on the boundary.
This allows you to scale appropriately as you can add additional A records and forwarders as required.
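As a sketch of what that DNS setup could look like: the zone fragment below uses the `splunk.ext.yourcompany.com` name from above with placeholder IPs from the documentation range; the names, TTL, and addresses are all assumptions, not values from this thread.

```
; Hypothetical BIND zone fragment -- IPs and TTL are placeholders.
; One A record per boundary forwarder; clients resolving the name
; receive all three addresses and round-robin across them.
splunk.ext.yourcompany.com.  300  IN  A  203.0.113.10
splunk.ext.yourcompany.com.  300  IN  A  203.0.113.11
splunk.ext.yourcompany.com.  300  IN  A  203.0.113.12
```

A short TTL like 300 seconds means a record you add or remove takes effect quickly on the sending side.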
I appreciate you sharing your experience, livehybrid. I've shared your response with my team.
We have some very strong network engineers and they're hoping to get more details from your experience. What kind of failures were you seeing when you tried using a load balancer?
Looking at F5, they've said there are a few different ways to configure the load balancers, from sticky connections to round robin, along with details around the state of the connections.
They're alluding to plenty of room for error on the load balancer configuration side, but without knowing the details, they're likely going to go down the "trust but verify" route.