
Max data ingestion via UDP on Heavy Forwarder

ewan000
Path Finder

Hi,

We are ingesting syslog and statsd data directly on a Heavy Forwarder via UDP inputs.

This seems to work fine for small volumes of data, but heavy load causes ingestion on all of these inputs to drop off, even on inputs not receiving large amounts of data.

What limits are there in Splunk or Linux that might cause this behaviour, and does anyone know of a fix?

PS. I know the recommendation is to use a separate syslog box and then read the files, but the reason given for this is always "you lose data when you restart Splunk", which is not a problem for us. Is there a secret reason?!
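
For reference, the inputs in question are plain UDP stanzas in inputs.conf on the Heavy Forwarder, roughly like the sketch below (ports and sourcetypes are illustrative, not the exact config):

# $SPLUNK_HOME/etc/system/local/inputs.conf on the Heavy Forwarder
[udp://514]
sourcetype = syslog
connection_host = ip

[udp://8125]
sourcetype = statsd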


codebuilder
Influencer

Increase maxKBps on your forwarder(s). The default value is 256. Increase this setting to a value appropriate to your system and cycle the daemon.

This setting is in the [thruput] stanza within limits.conf

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf#.5Bthruput.5D
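
As a minimal sketch, the change on the forwarder would look something like this (the value below is illustrative; 0 removes the limit entirely, otherwise pick a value sized to your network and indexers):

# $SPLUNK_HOME/etc/system/local/limits.conf on the Heavy Forwarder
[thruput]
# 0 = unlimited; any other value is the per-pipeline cap in KB/s
maxKBps = 0

Then restart splunkd for the change to take effect.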

----
An upvote would be appreciated and Accept Solution if it helps!

ewan000
Path Finder

I'll give this a go, thanks.


ewan000
Path Finder

It should be noted, though, that the heavy forwarder is not indexing the data, just passing it on to the indexers.


codebuilder
Influencer

That setting controls how much network throughput the forwarder is allowed to consume. The default setting is generally much too low for most implementations.
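
If you want to confirm the forwarder is actually hitting that ceiling, one way (assuming the forwarder ships its own _internal logs to the indexers) is to look at the thruput metrics it writes to metrics.log, for example:

index=_internal host=<your_heavy_forwarder> source=*metrics.log* group=thruput name=thruput
| timechart avg(instantaneous_kbps) avg(average_kbps)

If the values plateau just under the configured maxKBps, the forwarder is being throttled.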

----
An upvote would be appreciated and Accept Solution if it helps!