Getting Data In

Universal forwarder can't keep up with multiple log files? How do I deploy multiple forwarders on Linux?

twstanley
New Member

We have a universal forwarder on Linux that seems to get 'stuck' reading one of the two high-volume log files it is reading from an iSeries IFS.

The file it gets stuck on can be either of the two. The files receive events at roughly 60 per second during the busy parts of the day, and I suspect the forwarder just can't reach end-of-file on one before switching to the other.

Is there any way to configure the forwarder for this situation? Add more 'reader' threads?

If not, how do I set up another forwarder on Linux? Duplicating the /opt/splunkforwarder directory seems kind of problematic...
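(Editor's note, a rough and untested sketch: a second instance mainly needs its own install directory and its own management port. The path /opt/splunkforwarder2 and port 8090 below are examples only. Extract the universal forwarder tarball a second time into /opt/splunkforwarder2, then give that copy a distinct management port before its first start:

/opt/splunkforwarder2/etc/system/local/web.conf
[settings]
mgmtHostPort = 127.0.0.1:8090

With distinct ports, both instances can run side by side on the same host.)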


dwaddle
SplunkTrust

An alternative you might want to consider is iSeries PASE. I doubt anyone from Splunk has ever tried this, but the PASE environment is supposed to be binary compatible with most AIX programs. You might be able to do something as simple as install the AIX universal forwarder and fire it up directly on the iSeries machine.

Of course, it may not work. (And even if it does work, it will be on shaky ground w.r.t. Splunk support) But it's worth the effort of a test. If you try this, please report back results.


vbumgarner
Contributor

If there are more than 20 megabytes left to read before EOF, the forwarder effectively falls back to a single stream, reading that file until it catches up. In most cases this is fine and desirable, but in some cases it can be annoying.

If neither the indexer nor your network is the bottleneck, then I'm wondering if the default maxKBps is getting you. It is set quite low to prevent high CPU usage on forwarders.

You can increase the value in limits.conf. The default is 256 KBps:

[thruput]
maxKBps = 256

Create a new app, called say YourCompanyForwarder, and put a limits.conf in its local directory with a larger value:

YourCompanyForwarder/local/limits.conf
[thruput]
maxKBps = 256000
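(Editor's note, assuming the same hypothetical app layout as above: setting maxKBps to 0 removes the thruput cap entirely, which can be handy as a quick test to confirm the cap is the bottleneck before settling on a bounded value:

YourCompanyForwarder/local/limits.conf
[thruput]
maxKBps = 0

Remember to restart the forwarder after changing limits.conf.)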

adrianholguin
New Member

Having effectively the same issue. In reading this answer, I am completely stuck on "In most cases, this is fine and desirable." Can you provide examples of when this might be fine and desirable? I have had to make major changes due to this 20MB limitation, and this is the only 'documentation' that speaks directly to the phenomenon I see. Additionally, what is the relationship between the maxKBps value and the MB left before EOF? Thanks!
