Getting Data In

Universal Forwarder failing to keep up

toddbruner
Explorer

I have a RHEL 6 system receiving UDP 514 traffic from network equipment and writing it to the file /data/log/messages. The inbound message volume is very high, but the system is writing to the file just fine. I installed a universal forwarder (4.3.3) and, through our deployment server, pushed down an app to monitor that file and send it to our 10 indexers.

In the three hours since implementing this, only the first 50 minutes of logs have been indexed.

Looking at netstat -pn, we can see that the Splunk forwarder connects to only one of our indexers at a time. Could this be the bottleneck? Any suggestions on how to improve performance?

inputs.conf
[monitor:///data/logs/messages]
disabled = false
source = network
sourcetype = syslog
index = network

Thanks.
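For context on the netstat observation: a universal forwarder with automatic load balancing enabled holds one indexer connection at a time and rotates to another target every autoLBFrequency seconds (30 by default), so a single active connection is expected behavior rather than a misconfiguration. A minimal load-balanced outputs.conf might look like the sketch below; the group name and indexer hostnames are placeholders, not values from this thread.

outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Auto load balancing: the forwarder sends to one indexer at a time
# and switches to another member of the list every autoLBFrequency seconds.
autoLB = true
autoLBFrequency = 30
server = indexer1.example.com:9997, indexer2.example.com:9997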

1 Solution

toddbruner
Explorer

OK, for others looking into this, Splunk Support helped me with it (thanks, Seth).

In $SPLUNK_HOME/etc/apps/SplunkUniversalForwarder/default there is a limits.conf file. In it, you can change the [thruput] stanza so that maxKBps = 0 to remove any rate limiting.
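For reference, the stanza would look something like the following. One caveat: files under default/ are overwritten on upgrade, so the usual practice is to place the override in a local directory (or in an app deployed from the deployment server) rather than editing the default file directly.

limits.conf
[thruput]
# 0 removes the forwarder's bandwidth cap entirely; the universal forwarder
# ships with a low default here (maxKBps = 256), which is what throttled
# the monitor input in this case.
maxKBps = 0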

