@sureshkumaar When the thruput limit is reached, monitoring pauses and events like the following are recorded in splunkd.log:

INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...

To review the forwarder's effective input configuration, run:

/opt/splunkforwarder/bin/splunk btool inputs list --debug

To verify how often the forwarder is hitting this limit, check the forwarder's metrics.log. (Look for it on the forwarder itself, because metrics.log is not forwarded by default on universal and light forwarders.)

cd /opt/splunkforwarder/var/log/splunk
grep "name=thruput" metrics.log

Example: instantaneous_kbps and average_kbps always stay under 256 KBps.

11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122

Solution

Create a custom limits.conf with a higher limit, or no limit at all. The configuration can live in system/local, or in an app that takes precedence over the existing limit.

Example: configure a dedicated app, in /opt/splunkforwarder/etc/apps/Gofaster/local/limits.conf

Double the thruput limit, from 256 to 512 KBps:

[thruput]
maxKBps = 512

Or for unlimited thruput:

[thruput]
maxKBps = 0

Unlimited speed can cause higher resource usage on the forwarder, so keep a limit if you need to control monitoring and network usage. Restart the forwarder to apply the change. Verify the resulting configuration with btool, and later check metrics.log to confirm the forwarder is no longer hitting the new limit constantly.
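As a rough sketch of the metrics.log check above, the snippet below counts how many thruput samples sit at or above 90% of an assumed 256 KBps limit. The sample log line is the one from this answer written to a temp file; on a real forwarder you would point LOG at /opt/splunkforwarder/var/log/splunk/metrics.log instead. The 90% threshold is an arbitrary choice for illustration.

```shell
# Assumed limit (the default maxKBps); adjust to match your limits.conf.
LIMIT=256

# Stand-in for the forwarder's metrics.log, using the sample line from this answer.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122
EOF

# Count samples whose instantaneous_kbps is at or above 90% of the limit.
result=$(grep "name=thruput" "$LOG" |
  awk -v limit="$LIMIT" -F'instantaneous_kbps=' '
    { split($2, a, ","); if (a[1] + 0 >= limit * 0.9) hits++ }
    END { printf "samples at/above %.1f KBps: %d", limit * 0.9, hits + 0 }')
echo "$result"

rm -f "$LOG"
```

If the count stays high across a day of samples, the forwarder is saturating the limit and raising maxKBps (as described in the solution above) is worth considering.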