We found a couple of things causing issues like this. They are not necessarily the same issues you're seeing.
I did some math and realized we had blocking because our Universal Forwarder was hitting the default throughput limit in limits.conf:
[thruput]
maxKBps = 256
So we changed that to 0, which makes throughput unlimited. Keep in mind this can increase CPU usage on the host where the forwarder runs.
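For reference, the override goes in a local copy of limits.conf on the forwarder, e.g. $SPLUNK_HOME/etc/system/local/limits.conf (the typical default path; yours may differ):
[thruput]
# 0 = unlimited; the Universal Forwarder default is 256 KB/s
maxKBps = 0
Restart the forwarder ($SPLUNK_HOME/bin/splunk restart) to pick up the change.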
This allowed the forwarder to catch up. I was then able to analyze metrics.log on the forwarder to see what throughput was actually required (the alternative is to do the math up front and estimate how much throughput you need).
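If the forwarder's internal logs are reaching your indexers, a search along these lines shows the observed throughput over time (my_forwarder is a placeholder hostname; adjust to your environment):
index=_internal source=*metrics.log group=thruput name=thruput host=my_forwarder
| timechart span=5m avg(instantaneous_kbps) max(instantaneous_kbps)
If the max regularly sits right at your configured limit, the forwarder is being throttled.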
The other thing was that we had to disable useACK in the forwarder's outputs.conf:
[tcpout:mygroup]
useACK = false
This was because the ACK traffic added overhead and introduced pauses while the forwarder waited for acknowledgements. Note that disabling useACK trades indexer acknowledgement (protection against data loss) for throughput.
So in conclusion, check out the metrics.log and take a hard look at where the pipeline is backing up.
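On the forwarder itself, a quick way to spot the backup is to grep for blocked queues (path assumes a default install):
grep "blocked=true" $SPLUNK_HOME/var/log/splunk/metrics.log
Lines like group=queue, name=parsingqueue, blocked=true tell you which queue in the pipeline is filling up.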
Hopefully that helps you as well!