Hi Team,
While setting up our new remote Heavy Forwarder, we configured it to collect data from 20 universal forwarders and syslog devices, averaging about 30 GB daily. To control network bandwidth usage, we applied a maximum throughput limit of 1 MB/s (1024 KB/s) using the maxKBps setting in limits.conf on the new remote Heavy Forwarder. This setting is intended to cap the rate at which data is forwarded to our Indexers, so that we do not exceed the specified bandwidth limit.
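For reference, the throttle in question lives under the [thruput] stanza of limits.conf on the forwarder; the value below matches the scenario described (the default differs by forwarder type — to my knowledge 256 on universal forwarders and unlimited on heavy forwarders, but check the docs for your version):

```
[thruput]
# Cap average forwarding throughput at 1024 KB/s (~1 MB/s).
# 0 means unlimited.
maxKBps = 1024
```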
However, according to Splunk documentation, this configuration doesn't guarantee that data transmission will always stay below the set maxKBps. It depends on factors such as the status of processing queues and doesn't directly restrict the volume of data being sent over the network.
How can we ensure the remote HF never exceeds the value set in maxKBps under any circumstances?
Regards
VK
It's a rather philosophical question. The short answer is you can't.
The long answer is: depending on your definition of throughput, there is always a "lower-level" metric you cannot control (for example, once a packet is on the wire, it is transmitted at line speed — you can't go slower than that). So setting a throughput limit in limits.conf should keep you below said limit on average, but you can still have bursts of data exceeding it.
In fact, due to how networks work, the only way to put a hard cap on instantaneous throughput is to use a medium (or a network device doing traffic shaping) whose line speed is itself capped.
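To make the "average vs. burst" distinction concrete, here is a toy simulation (not Splunk code — the line speed, cap, and chunk size are made-up values): the sender pushes each chunk at line speed, then pauses so the running average stays at the cap. The average obeys the limit; the instantaneous peak does not.

```python
LINE_SPEED_KBPS = 10_000   # hypothetical wire speed (KB/s)
MAX_KBPS = 1024            # the average cap, analogous to maxKBps
CHUNK_KB = 64              # data leaves in chunks, not byte-by-byte

def simulate(total_kb):
    """Return (average_kbps, peak_kbps) over the whole transfer."""
    t = 0.0      # simulated clock, seconds
    peak = 0.0
    sent = 0
    while sent < total_kb:
        chunk = min(CHUNK_KB, total_kb - sent)
        burst_time = chunk / LINE_SPEED_KBPS      # chunk goes out at wire speed
        peak = max(peak, chunk / burst_time)      # instantaneous rate = line speed
        t += burst_time
        sent += chunk
        # Throttle: wait until the running average drops to the cap.
        t = max(t, sent / MAX_KBPS)
    return sent / t, peak

avg, peak = simulate(10_240)
print(f"average: {avg:.0f} KB/s, peak: {peak:.0f} KB/s")
# average stays at the 1024 KB/s cap, but the peak equals line speed
```

The same thing happens on a real forwarder: between throttled pauses, packets still leave the NIC at full line rate, which is why maxKBps cannot be a hard per-instant guarantee.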