Hi,
We are ingesting syslog and statsd data directly on a heavy forwarder via UDP inputs.
This seems to work fine for small volumes of data, but under heavy load all ingestion on these inputs drops off, even on inputs that are not receiving large amounts of data.
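For reference, the inputs look roughly like this (the ports and sourcetypes are just examples, not our exact config):

    # inputs.conf on the heavy forwarder
    [udp://514]
    sourcetype = syslog
    connection_host = ip

    [udp://8125]
    sourcetype = statsd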
What limits in Splunk or Linux might cause this behaviour, and does anyone know of a fix?
PS. I know the recommendation is to use a separate syslog box and then read the files. But the reason always given for this is: "you lose data when you restart Splunk", which is not a problem for us. Is there a secret reason?!
Increase maxKBps on your forwarder(s). The default value is 256 KB/s. Raise this setting to a value appropriate to your system and cycle the daemon.
This setting is in the [thruput] stanza within limits.conf
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf#.5Bthruput.5D
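For example, on the heavy forwarder (the value below is illustrative; size it to your environment, or use 0 for unlimited):

    # $SPLUNK_HOME/etc/system/local/limits.conf
    [thruput]
    # KB per second; 0 means unlimited. 2048 is an example value.
    maxKBps = 2048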
I'll give this a go, thanks.
It should be noted, though, that the heavy forwarder is not indexing the data, just passing it on to the indexers.
That setting still applies: it controls how much network throughput the forwarder is allowed to consume, whether or not it indexes the data locally. The default is generally much too low for most implementations.
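One way to confirm the effective value after the change (btool ships with Splunk):

    # Show the effective [thruput] settings and which file each comes from
    $SPLUNK_HOME/bin/splunk btool limits list thruput --debug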